Mar 17 17:59:42.878402 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:59:42.878422 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:59:42.878430 kernel: BIOS-provided physical RAM map:
Mar 17 17:59:42.878436 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:59:42.878441 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:59:42.878446 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:59:42.878452 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Mar 17 17:59:42.878457 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Mar 17 17:59:42.878465 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 17:59:42.878470 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 17:59:42.878475 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:59:42.878480 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:59:42.878485 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:59:42.878490 kernel: NX (Execute Disable) protection: active
Mar 17 17:59:42.878499 kernel: APIC: Static calls initialized
Mar 17 17:59:42.878504 kernel: SMBIOS 3.0.0 present.
Mar 17 17:59:42.878510 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 17 17:59:42.878516 kernel: Hypervisor detected: KVM
Mar 17 17:59:42.878521 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:59:42.878526 kernel: kvm-clock: using sched offset of 2995159411 cycles
Mar 17 17:59:42.878532 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:59:42.878538 kernel: tsc: Detected 2445.406 MHz processor
Mar 17 17:59:42.878544 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:59:42.878552 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:59:42.878558 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Mar 17 17:59:42.878563 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:59:42.878569 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:59:42.878575 kernel: Using GB pages for direct mapping
Mar 17 17:59:42.878580 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:59:42.878586 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Mar 17 17:59:42.878591 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878597 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878605 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878610 kernel: ACPI: FACS 0x000000007CFE0000 000040
Mar 17 17:59:42.878688 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878697 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878703 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878709 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:59:42.878714 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Mar 17 17:59:42.878720 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Mar 17 17:59:42.878732 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Mar 17 17:59:42.878738 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Mar 17 17:59:42.878744 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Mar 17 17:59:42.878750 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Mar 17 17:59:42.878756 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Mar 17 17:59:42.878762 kernel: No NUMA configuration found
Mar 17 17:59:42.878770 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Mar 17 17:59:42.878776 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Mar 17 17:59:42.878782 kernel: Zone ranges:
Mar 17 17:59:42.878788 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:59:42.878793 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Mar 17 17:59:42.878799 kernel: Normal empty
Mar 17 17:59:42.878805 kernel: Movable zone start for each node
Mar 17 17:59:42.878811 kernel: Early memory node ranges
Mar 17 17:59:42.878817 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:59:42.878823 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Mar 17 17:59:42.878830 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Mar 17 17:59:42.878836 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:59:42.878842 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:59:42.878848 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 17:59:42.878854 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:59:42.878859 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:59:42.878865 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:59:42.878871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:59:42.878877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:59:42.878885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:59:42.878891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:59:42.878897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:59:42.878903 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:59:42.878908 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:59:42.878914 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:59:42.878920 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:59:42.878926 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 17:59:42.878932 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:59:42.878940 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:59:42.878946 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:59:42.878952 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:59:42.878958 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:59:42.878963 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:59:42.878969 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 17:59:42.878976 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:59:42.878982 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:59:42.878990 kernel: random: crng init done
Mar 17 17:59:42.878996 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:59:42.879002 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:59:42.879007 kernel: Fallback order for Node 0: 0
Mar 17 17:59:42.879013 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Mar 17 17:59:42.879019 kernel: Policy zone: DMA32
Mar 17 17:59:42.879025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:59:42.879031 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 125152K reserved, 0K cma-reserved)
Mar 17 17:59:42.879037 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:59:42.879045 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:59:42.879051 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:59:42.879057 kernel: Dynamic Preempt: voluntary
Mar 17 17:59:42.879063 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:59:42.879069 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:59:42.879075 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:59:42.879081 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:59:42.879087 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:59:42.879093 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:59:42.879099 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:59:42.879107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:59:42.879113 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:59:42.879119 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:59:42.879124 kernel: Console: colour VGA+ 80x25
Mar 17 17:59:42.879130 kernel: printk: console [tty0] enabled
Mar 17 17:59:42.879136 kernel: printk: console [ttyS0] enabled
Mar 17 17:59:42.879142 kernel: ACPI: Core revision 20230628
Mar 17 17:59:42.879148 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:59:42.879154 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:59:42.879161 kernel: x2apic enabled
Mar 17 17:59:42.879167 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:59:42.879173 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:59:42.879179 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:59:42.879185 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Mar 17 17:59:42.879191 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:59:42.879197 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:59:42.879203 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:59:42.879217 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:59:42.879223 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:59:42.879230 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:59:42.879236 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:59:42.879244 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:59:42.879250 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:59:42.879256 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:59:42.879263 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:59:42.879269 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:59:42.879277 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:59:42.879284 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:59:42.879290 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:59:42.879296 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:59:42.879302 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:59:42.879309 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:59:42.879315 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:59:42.879321 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:59:42.879329 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:59:42.879335 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:59:42.879341 kernel: landlock: Up and running.
Mar 17 17:59:42.879348 kernel: SELinux: Initializing.
Mar 17 17:59:42.879354 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:59:42.879360 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:59:42.879366 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:59:42.879372 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:59:42.879379 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:59:42.879387 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:59:42.879393 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:59:42.879399 kernel: ... version:                0
Mar 17 17:59:42.879405 kernel: ... bit width:              48
Mar 17 17:59:42.879411 kernel: ... generic registers:      6
Mar 17 17:59:42.879418 kernel: ... value mask:             0000ffffffffffff
Mar 17 17:59:42.879424 kernel: ... max period:             00007fffffffffff
Mar 17 17:59:42.879430 kernel: ... fixed-purpose events:   0
Mar 17 17:59:42.879436 kernel: ... event mask:             000000000000003f
Mar 17 17:59:42.879444 kernel: signal: max sigframe size: 1776
Mar 17 17:59:42.879450 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:59:42.879456 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:59:42.879462 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:59:42.879468 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:59:42.879475 kernel: .... node #0, CPUs: #1
Mar 17 17:59:42.879481 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:59:42.879487 kernel: smpboot: Max logical packages: 1
Mar 17 17:59:42.879493 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Mar 17 17:59:42.879501 kernel: devtmpfs: initialized
Mar 17 17:59:42.879507 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:59:42.879514 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:59:42.879520 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:59:42.879526 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:59:42.879532 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:59:42.879538 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:59:42.879545 kernel: audit: type=2000 audit(1742234382.599:1): state=initialized audit_enabled=0 res=1
Mar 17 17:59:42.879551 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:59:42.879559 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:59:42.879565 kernel: cpuidle: using governor menu
Mar 17 17:59:42.879571 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:59:42.879577 kernel: dca service started, version 1.12.1
Mar 17 17:59:42.879584 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 17:59:42.879590 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:59:42.879596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:59:42.879602 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:59:42.879608 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:59:42.879629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:59:42.879635 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:59:42.879649 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:59:42.879655 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:59:42.879661 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:59:42.879667 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:59:42.879674 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:59:42.879680 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:59:42.879686 kernel: ACPI: Interpreter enabled
Mar 17 17:59:42.879694 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:59:42.879701 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:59:42.879707 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:59:42.879713 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:59:42.879719 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:59:42.879725 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:59:42.879878 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:59:42.879992 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:59:42.880101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:59:42.880110 kernel: PCI host bridge to bus 0000:00
Mar 17 17:59:42.880223 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:59:42.880320 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:59:42.880415 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:59:42.880509 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Mar 17 17:59:42.880602 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 17:59:42.880748 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 17:59:42.880847 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:59:42.880967 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:59:42.881080 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 17 17:59:42.881183 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Mar 17 17:59:42.881286 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Mar 17 17:59:42.881394 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Mar 17 17:59:42.881496 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Mar 17 17:59:42.881600 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:59:42.881747 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.881853 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Mar 17 17:59:42.881964 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.882067 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Mar 17 17:59:42.882183 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.882286 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Mar 17 17:59:42.882396 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.882500 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Mar 17 17:59:42.882614 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.882762 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Mar 17 17:59:42.882874 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.882989 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Mar 17 17:59:42.883174 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.883279 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Mar 17 17:59:42.883392 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.883495 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Mar 17 17:59:42.883611 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:59:42.884389 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Mar 17 17:59:42.884505 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:59:42.884609 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:59:42.884815 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:59:42.884920 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Mar 17 17:59:42.885028 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Mar 17 17:59:42.885137 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:59:42.885239 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 17:59:42.885357 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:59:42.885464 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Mar 17 17:59:42.885572 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 17 17:59:42.885710 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Mar 17 17:59:42.887683 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:59:42.887806 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 17:59:42.887913 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 17:59:42.888033 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:59:42.888143 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Mar 17 17:59:42.888247 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:59:42.888376 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 17:59:42.888483 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 17:59:42.888599 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:59:42.890771 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Mar 17 17:59:42.890892 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Mar 17 17:59:42.891000 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:59:42.891104 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 17:59:42.891214 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 17:59:42.891332 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:59:42.891440 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 17 17:59:42.891544 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:59:42.891737 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 17:59:42.891846 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 17:59:42.891964 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:59:42.892080 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Mar 17 17:59:42.892188 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Mar 17 17:59:42.892291 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:59:42.892392 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 17:59:42.892494 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 17:59:42.892611 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:59:42.892928 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Mar 17 17:59:42.893038 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Mar 17 17:59:42.893147 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:59:42.893250 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 17:59:42.893353 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 17:59:42.893361 kernel: acpiphp: Slot [0] registered
Mar 17 17:59:42.893475 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:59:42.893583 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Mar 17 17:59:42.895774 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Mar 17 17:59:42.895900 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Mar 17 17:59:42.896007 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:59:42.896111 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 17:59:42.896213 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 17:59:42.896221 kernel: acpiphp: Slot [0-2] registered
Mar 17 17:59:42.896324 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:59:42.896427 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 17 17:59:42.896529 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 17:59:42.896537 kernel: acpiphp: Slot [0-3] registered
Mar 17 17:59:42.896668 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:59:42.896774 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 17:59:42.896875 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 17:59:42.896884 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:59:42.896891 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:59:42.896897 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:59:42.896903 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:59:42.896909 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:59:42.896919 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:59:42.896926 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:59:42.896932 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:59:42.896938 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:59:42.896944 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:59:42.896951 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:59:42.896957 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:59:42.896963 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:59:42.896969 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:59:42.896977 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:59:42.896984 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:59:42.896990 kernel: iommu: Default domain type: Translated
Mar 17 17:59:42.896996 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:59:42.897002 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:59:42.897008 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:59:42.897015 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:59:42.897021 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Mar 17 17:59:42.897125 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:59:42.897231 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:59:42.897334 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:59:42.897343 kernel: vgaarb: loaded
Mar 17 17:59:42.897350 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:59:42.897356 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:59:42.897363 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:59:42.897369 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:59:42.897375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:59:42.897381 kernel: pnp: PnP ACPI init
Mar 17 17:59:42.897501 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 17:59:42.897512 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 17:59:42.897518 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:59:42.897524 kernel: NET: Registered PF_INET protocol family
Mar 17 17:59:42.897531 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:59:42.897537 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:59:42.897543 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:59:42.897550 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:59:42.897559 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:59:42.897566 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:59:42.897572 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:59:42.897578 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:59:42.897584 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:59:42.897591 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:59:42.899256 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:59:42.899374 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:59:42.899525 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:59:42.899672 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:59:42.899782 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:59:42.899886 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:59:42.899988 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:59:42.900091 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 17:59:42.900193 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 17:59:42.900295 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:59:42.900404 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 17:59:42.900507 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 17:59:42.900610 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:59:42.902769 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 17:59:42.902879 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 17:59:42.902982 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:59:42.903089 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 17:59:42.903208 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 17:59:42.903314 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:59:42.903416 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 17:59:42.903516 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 17:59:42.903669 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:59:42.903781 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 17:59:42.903884 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 17:59:42.903985 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:59:42.904087 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 17 17:59:42.904195 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 17:59:42.904297 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 17:59:42.904398 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:59:42.904499 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 17 17:59:42.904599 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 17 17:59:42.906738 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 17:59:42.906851 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:59:42.906953 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 17 17:59:42.907055 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 17:59:42.907158 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 17:59:42.907257 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:59:42.907356 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:59:42.907452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:59:42.907545 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Mar 17 17:59:42.907676 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 17:59:42.907775 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 17:59:42.907883 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 17 17:59:42.907983 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 17:59:42.908095 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 17 17:59:42.908194 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 17:59:42.908299 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 17 17:59:42.908398 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 17:59:42.908504 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 17 17:59:42.908602 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 17:59:42.910787 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 17 17:59:42.910891 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 17:59:42.910997 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 17 17:59:42.911095 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 17:59:42.911200 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Mar 17 17:59:42.911298 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 17 17:59:42.911395 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 17:59:42.911505 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Mar 17 17:59:42.911604 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Mar 17 17:59:42.911761 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 17:59:42.911874 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Mar 17 17:59:42.911973 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 17 17:59:42.912070 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 17:59:42.912083 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:59:42.912091 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:59:42.912097 kernel: Initialise system trusted keyrings
Mar 17 17:59:42.912104 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:59:42.912111 kernel: Key type asymmetric registered
Mar 17 17:59:42.912118 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:59:42.912124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:59:42.912131 kernel: io scheduler mq-deadline registered
Mar 17 17:59:42.912138 kernel: io scheduler kyber registered
Mar 17 17:59:42.912144 kernel: io scheduler bfq registered
Mar 17 17:59:42.912251 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Mar 17 17:59:42.912354 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Mar 17 17:59:42.912456 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Mar 17 17:59:42.912557 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Mar 17 17:59:42.914784 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Mar 17 17:59:42.914986 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Mar 17 17:59:42.915115 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Mar 17 17:59:42.915241 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Mar 17 17:59:42.915352 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Mar 17 17:59:42.915455 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Mar 17 17:59:42.915558 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Mar 17 17:59:42.915717 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Mar 17 17:59:42.915824 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Mar 17 17:59:42.915927 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Mar 17 17:59:42.916028 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Mar 17 17:59:42.916130 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Mar 17 17:59:42.916144 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:59:42.916245 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Mar 17 17:59:42.916348 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Mar 17 17:59:42.916357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:59:42.916364 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Mar 17 17:59:42.916371 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:59:42.916377 kernel: 00:00: ttyS0 at I/O
0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:59:42.916384 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:59:42.916391 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:59:42.916400 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:59:42.916407 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:59:42.916515 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 17:59:42.916612 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 17:59:42.917610 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:59:42 UTC (1742234382) Mar 17 17:59:42.917780 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 17 17:59:42.917792 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 17 17:59:42.917799 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:59:42.917810 kernel: Segment Routing with IPv6 Mar 17 17:59:42.917817 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:59:42.917823 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:59:42.917830 kernel: Key type dns_resolver registered Mar 17 17:59:42.917837 kernel: IPI shorthand broadcast: enabled Mar 17 17:59:42.917843 kernel: sched_clock: Marking stable (1079008041, 133147786)->(1219458007, -7302180) Mar 17 17:59:42.917850 kernel: registered taskstats version 1 Mar 17 17:59:42.917857 kernel: Loading compiled-in X.509 certificates Mar 17 17:59:42.917863 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:59:42.917872 kernel: Key type .fscrypt registered Mar 17 17:59:42.917878 kernel: Key type fscrypt-provisioning registered Mar 17 17:59:42.917885 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 17 17:59:42.917892 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:59:42.917900 kernel: ima: No architecture policies found Mar 17 17:59:42.917907 kernel: clk: Disabling unused clocks Mar 17 17:59:42.917913 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:59:42.917920 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:59:42.917928 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:59:42.917935 kernel: Run /init as init process Mar 17 17:59:42.917941 kernel: with arguments: Mar 17 17:59:42.917948 kernel: /init Mar 17 17:59:42.917955 kernel: with environment: Mar 17 17:59:42.917961 kernel: HOME=/ Mar 17 17:59:42.917967 kernel: TERM=linux Mar 17 17:59:42.917974 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:59:42.917982 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:59:42.917993 systemd[1]: Detected virtualization kvm. Mar 17 17:59:42.918001 systemd[1]: Detected architecture x86-64. Mar 17 17:59:42.918007 systemd[1]: Running in initrd. Mar 17 17:59:42.918014 systemd[1]: No hostname configured, using default hostname. Mar 17 17:59:42.918021 systemd[1]: Hostname set to . Mar 17 17:59:42.918028 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:59:42.918035 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:59:42.918042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:59:42.918051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 17 17:59:42.918059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:59:42.918067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:59:42.918074 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:59:42.918081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:59:42.918089 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:59:42.918099 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:59:42.918106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:59:42.918113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:59:42.918120 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:59:42.918127 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:59:42.918134 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:59:42.918141 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:59:42.918148 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:59:42.918155 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:59:42.918164 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:59:42.918171 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:59:42.918178 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:59:42.918185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:59:42.918191 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:59:42.918198 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:59:42.918205 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:59:42.918212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:59:42.918221 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:59:42.918228 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:59:42.918235 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:59:42.918242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:59:42.918249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:59:42.918273 systemd-journald[188]: Collecting audit messages is disabled. Mar 17 17:59:42.918292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:59:42.918300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:59:42.918307 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:59:42.918314 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:59:42.918324 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:59:42.918331 kernel: Bridge firewalling registered Mar 17 17:59:42.918338 systemd-journald[188]: Journal started Mar 17 17:59:42.918353 systemd-journald[188]: Runtime Journal (/run/log/journal/e3ed873c65ce467aabf4ae0ca71ad1aa) is 4.8M, max 38.4M, 33.6M free. Mar 17 17:59:42.886181 systemd-modules-load[189]: Inserted module 'overlay' Mar 17 17:59:42.916912 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 17 17:59:42.954651 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:59:42.954709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 17 17:59:42.955309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:59:42.957517 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:59:42.962755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:59:42.964445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:59:42.967955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:59:42.973918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:59:42.979936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:59:42.982887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:59:42.984302 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:59:42.987766 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:59:42.998741 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:59:43.001456 dracut-cmdline[222]: dracut-dracut-053 Mar 17 17:59:43.004436 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:59:43.007090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:59:43.033472 systemd-resolved[231]: Positive Trust Anchors: Mar 17 17:59:43.034132 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:59:43.034906 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:59:43.036978 systemd-resolved[231]: Defaulting to hostname 'linux'. Mar 17 17:59:43.040000 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:59:43.040571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:59:43.069668 kernel: SCSI subsystem initialized Mar 17 17:59:43.078647 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:59:43.087643 kernel: iscsi: registered transport (tcp) Mar 17 17:59:43.105967 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:59:43.106005 kernel: QLogic iSCSI HBA Driver Mar 17 17:59:43.145768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:59:43.151768 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:59:43.172889 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 17 17:59:43.172919 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:59:43.174032 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:59:43.212664 kernel: raid6: avx2x4 gen() 32991 MB/s Mar 17 17:59:43.229656 kernel: raid6: avx2x2 gen() 30520 MB/s Mar 17 17:59:43.246735 kernel: raid6: avx2x1 gen() 26625 MB/s Mar 17 17:59:43.246764 kernel: raid6: using algorithm avx2x4 gen() 32991 MB/s Mar 17 17:59:43.264838 kernel: raid6: .... xor() 4636 MB/s, rmw enabled Mar 17 17:59:43.264863 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:59:43.283656 kernel: xor: automatically using best checksumming function avx Mar 17 17:59:43.410671 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:59:43.422672 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:59:43.427777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:59:43.444972 systemd-udevd[407]: Using default interface naming scheme 'v255'. Mar 17 17:59:43.448701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:59:43.457832 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:59:43.469536 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Mar 17 17:59:43.497498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:59:43.501830 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:59:43.565517 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:59:43.572787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:59:43.589156 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:59:43.590783 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 17 17:59:43.591239 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:59:43.593316 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:59:43.598764 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:59:43.608384 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:59:43.651680 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:59:43.654641 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 17:59:43.660681 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:59:43.701689 kernel: libata version 3.00 loaded. Mar 17 17:59:43.701844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:59:43.701979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:59:43.704521 kernel: ACPI: bus type USB registered Mar 17 17:59:43.704552 kernel: usbcore: registered new interface driver usbfs Mar 17 17:59:43.704562 kernel: usbcore: registered new interface driver hub Mar 17 17:59:43.704571 kernel: usbcore: registered new device driver usb Mar 17 17:59:43.708729 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:59:43.709908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:59:43.710003 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:59:43.710541 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:59:43.720883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 17 17:59:43.732919 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 17:59:43.748554 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 17:59:43.748571 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 17:59:43.749325 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 17:59:43.749473 kernel: scsi host1: ahci Mar 17 17:59:43.749610 kernel: scsi host2: ahci Mar 17 17:59:43.752860 kernel: scsi host3: ahci Mar 17 17:59:43.753006 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:59:43.753017 kernel: scsi host4: ahci Mar 17 17:59:43.753148 kernel: AES CTR mode by8 optimization enabled Mar 17 17:59:43.753158 kernel: scsi host5: ahci Mar 17 17:59:43.753280 kernel: scsi host6: ahci Mar 17 17:59:43.753401 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Mar 17 17:59:43.753415 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Mar 17 17:59:43.753424 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Mar 17 17:59:43.753435 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Mar 17 17:59:43.753444 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Mar 17 17:59:43.753452 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Mar 17 17:59:43.797790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:59:43.802750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:59:43.818284 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 17 17:59:44.064846 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 17 17:59:44.064937 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 17:59:44.064960 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 17:59:44.064980 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 17:59:44.064997 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 17:59:44.066657 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 17:59:44.069322 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 17:59:44.069364 kernel: ata1.00: applying bridge limits Mar 17 17:59:44.070842 kernel: ata1.00: configured for UDMA/100 Mar 17 17:59:44.074657 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:59:44.112942 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:59:44.135737 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 17:59:44.135883 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 17 17:59:44.148189 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 17 17:59:44.148382 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 17:59:44.148558 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 17:59:44.148743 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 17 17:59:44.148887 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:59:44.149029 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 17:59:44.149166 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 17:59:44.149293 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Mar 17 17:59:44.149308 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 17:59:44.149433 kernel: GPT:17805311 != 80003071 Mar 17 17:59:44.149444 kernel: hub 1-0:1.0: USB hub found Mar 17 17:59:44.149587 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:59:44.149598 kernel: hub 1-0:1.0: 4 ports detected Mar 17 17:59:44.150939 kernel: GPT:17805311 != 80003071 Mar 17 17:59:44.150951 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 17:59:44.151102 kernel: hub 2-0:1.0: USB hub found Mar 17 17:59:44.151257 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:59:44.151270 kernel: hub 2-0:1.0: 4 ports detected Mar 17 17:59:44.152039 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:59:44.152056 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 17:59:44.168598 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 17:59:44.178767 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:59:44.178781 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:59:44.194650 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (467) Mar 17 17:59:44.197661 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (453) Mar 17 17:59:44.205758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 17:59:44.210520 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 17 17:59:44.215861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 17:59:44.217383 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 17 17:59:44.222240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Mar 17 17:59:44.230821 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:59:44.236318 disk-uuid[576]: Primary Header is updated. Mar 17 17:59:44.236318 disk-uuid[576]: Secondary Entries is updated. Mar 17 17:59:44.236318 disk-uuid[576]: Secondary Header is updated. Mar 17 17:59:44.240744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:59:44.369651 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 17:59:44.518702 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:59:44.528714 kernel: usbcore: registered new interface driver usbhid Mar 17 17:59:44.528787 kernel: usbhid: USB HID core driver Mar 17 17:59:44.540178 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 17 17:59:44.540226 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 17:59:45.250895 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:59:45.251341 disk-uuid[578]: The operation has completed successfully. Mar 17 17:59:45.319303 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:59:45.319477 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:59:45.340764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:59:45.344645 sh[595]: Success Mar 17 17:59:45.357660 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 17:59:45.416200 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:59:45.427737 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:59:45.433027 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:59:45.451326 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:59:45.451384 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:59:45.454890 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:59:45.454938 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:59:45.456372 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:59:45.467646 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:59:45.470231 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:59:45.471436 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:59:45.477768 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:59:45.481815 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:59:45.500479 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:59:45.500515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:59:45.500526 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:59:45.507923 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:59:45.507987 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:59:45.518043 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:59:45.520640 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:59:45.526994 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:59:45.532808 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 17 17:59:45.594307 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:59:45.608136 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:59:45.623639 ignition[697]: Ignition 2.20.0 Mar 17 17:59:45.623650 ignition[697]: Stage: fetch-offline Mar 17 17:59:45.625337 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:59:45.623683 ignition[697]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:59:45.623693 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:59:45.623771 ignition[697]: parsed url from cmdline: "" Mar 17 17:59:45.623775 ignition[697]: no config URL provided Mar 17 17:59:45.623780 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:59:45.628921 systemd-networkd[778]: lo: Link UP Mar 17 17:59:45.623788 ignition[697]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:59:45.628925 systemd-networkd[778]: lo: Gained carrier Mar 17 17:59:45.623793 ignition[697]: failed to fetch config: resource requires networking Mar 17 17:59:45.623937 ignition[697]: Ignition finished successfully Mar 17 17:59:45.632563 systemd-networkd[778]: Enumeration completed Mar 17 17:59:45.632678 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:59:45.633783 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:45.633787 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:59:45.634861 systemd[1]: Reached target network.target - Network. Mar 17 17:59:45.635352 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:59:45.635355 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:59:45.636527 systemd-networkd[778]: eth0: Link UP Mar 17 17:59:45.636532 systemd-networkd[778]: eth0: Gained carrier Mar 17 17:59:45.636545 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:45.641753 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 17:59:45.641975 systemd-networkd[778]: eth1: Link UP Mar 17 17:59:45.641980 systemd-networkd[778]: eth1: Gained carrier Mar 17 17:59:45.641990 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:59:45.653526 ignition[784]: Ignition 2.20.0 Mar 17 17:59:45.653537 ignition[784]: Stage: fetch Mar 17 17:59:45.654297 ignition[784]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:59:45.654309 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:59:45.654397 ignition[784]: parsed url from cmdline: "" Mar 17 17:59:45.654401 ignition[784]: no config URL provided Mar 17 17:59:45.654406 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:59:45.654415 ignition[784]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:59:45.654437 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 17 17:59:45.654584 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 17 17:59:45.677665 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:59:45.697660 systemd-networkd[778]: eth0: DHCPv4 address 157.180.43.77/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 17:59:45.854780 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 17 
17:59:45.862333 ignition[784]: GET result: OK Mar 17 17:59:45.862473 ignition[784]: parsing config with SHA512: b7c9793d05161dc5d722bdbcfec0755ab6f7c74d7335922e46c4a89a0c3d43eec67c28f6c7f83a95eae341ea2f1ed3e149c72f40d218fc341068deae0056b5c2 Mar 17 17:59:45.870558 unknown[784]: fetched base config from "system" Mar 17 17:59:45.870612 unknown[784]: fetched base config from "system" Mar 17 17:59:45.870660 unknown[784]: fetched user config from "hetzner" Mar 17 17:59:45.873524 ignition[784]: fetch: fetch complete Mar 17 17:59:45.873554 ignition[784]: fetch: fetch passed Mar 17 17:59:45.873692 ignition[784]: Ignition finished successfully Mar 17 17:59:45.877131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:59:45.885805 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:59:45.924389 ignition[792]: Ignition 2.20.0 Mar 17 17:59:45.924414 ignition[792]: Stage: kargs Mar 17 17:59:45.924751 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:59:45.924774 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:59:45.926303 ignition[792]: kargs: kargs passed Mar 17 17:59:45.928913 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:59:45.926381 ignition[792]: Ignition finished successfully Mar 17 17:59:45.946993 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:59:45.964852 ignition[798]: Ignition 2.20.0 Mar 17 17:59:45.964870 ignition[798]: Stage: disks Mar 17 17:59:45.965080 ignition[798]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:59:45.969275 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:59:45.965097 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:59:45.971077 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Mar 17 17:59:45.966293 ignition[798]: disks: disks passed
Mar 17 17:59:45.972553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:59:45.966350 ignition[798]: Ignition finished successfully
Mar 17 17:59:45.974532 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:59:45.976461 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:59:45.978075 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:59:45.987892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:59:46.013815 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:59:46.019175 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:59:46.028192 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:59:46.131125 kernel: EXT4-fs (sda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 17 17:59:46.131379 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:59:46.132370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:59:46.142693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:59:46.146705 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:59:46.147984 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:59:46.150083 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:59:46.150110 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:59:46.160240 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:59:46.162965 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814)
Mar 17 17:59:46.169763 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:59:46.169790 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:59:46.172793 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:59:46.177976 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:59:46.183403 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:59:46.183432 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:59:46.187422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:59:46.227184 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:59:46.230133 coreos-metadata[816]: Mar 17 17:59:46.230 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:59:46.231755 coreos-metadata[816]: Mar 17 17:59:46.231 INFO Fetch successful
Mar 17 17:59:46.231755 coreos-metadata[816]: Mar 17 17:59:46.231 INFO wrote hostname ci-4152-2-2-2-c2b93240d2 to /sysroot/etc/hostname
Mar 17 17:59:46.234637 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:59:46.237336 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:59:46.242359 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:59:46.246727 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:59:46.332609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:59:46.337707 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:59:46.341469 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:59:46.348640 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:59:46.369761 ignition[933]: INFO : Ignition 2.20.0
Mar 17 17:59:46.369761 ignition[933]: INFO : Stage: mount
Mar 17 17:59:46.369761 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:46.369761 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:59:46.372162 ignition[933]: INFO : mount: mount passed
Mar 17 17:59:46.372162 ignition[933]: INFO : Ignition finished successfully
Mar 17 17:59:46.374438 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:59:46.382736 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:59:46.383552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:59:46.448602 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:59:46.453755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:59:46.464015 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945)
Mar 17 17:59:46.464048 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:59:46.465658 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:59:46.467673 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:59:46.471899 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:59:46.471922 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:59:46.475348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:59:46.506948 ignition[962]: INFO : Ignition 2.20.0
Mar 17 17:59:46.508657 ignition[962]: INFO : Stage: files
Mar 17 17:59:46.508657 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:46.508657 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:59:46.511507 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:59:46.512611 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:59:46.512611 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:59:46.516256 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:59:46.517068 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:59:46.517068 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:59:46.516972 unknown[962]: wrote ssh authorized keys file for user: core
Mar 17 17:59:46.520350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:59:46.520350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:59:46.520350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:59:46.520350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:59:46.714749 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:59:47.259768 systemd-networkd[778]: eth1: Gained IPv6LL
Mar 17 17:59:47.643815 systemd-networkd[778]: eth0: Gained IPv6LL
Mar 17 17:59:47.787360 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:59:47.787360 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:59:47.789916 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 17:59:48.298117 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 17 17:59:48.398451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:59:48.398451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:48.401903 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:59:48.988685 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 17 17:59:49.394868 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:59:49.394868 ignition[962]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:59:49.397301 ignition[962]: INFO : files: files passed
Mar 17 17:59:49.397301 ignition[962]: INFO : Ignition finished successfully
Mar 17 17:59:49.398846 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:59:49.409389 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:59:49.412424 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:59:49.413752 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:59:49.413858 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:59:49.430184 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:59:49.431076 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:59:49.432361 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:59:49.433708 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:59:49.434605 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:59:49.442730 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:59:49.463571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:59:49.463704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:59:49.465033 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:59:49.465945 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:59:49.467024 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:59:49.476778 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:59:49.487665 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:59:49.495795 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:59:49.506106 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:59:49.506938 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:59:49.508045 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:59:49.509047 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:59:49.509144 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:59:49.510326 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:59:49.511035 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:59:49.512006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:59:49.513014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:59:49.514062 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:59:49.515079 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:59:49.516204 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:59:49.517386 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:59:49.518398 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:59:49.519444 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:59:49.520610 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:59:49.520786 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:59:49.521976 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:59:49.522797 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:59:49.523752 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:59:49.524219 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:59:49.525365 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:59:49.525458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:59:49.526849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:59:49.526951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:59:49.527601 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:59:49.527755 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:59:49.528748 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:59:49.528872 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:59:49.538177 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:59:49.541767 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:59:49.542268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:59:49.542418 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:59:49.544477 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:59:49.544727 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:59:49.553949 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:59:49.554074 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:59:49.556846 ignition[1015]: INFO : Ignition 2.20.0
Mar 17 17:59:49.556846 ignition[1015]: INFO : Stage: umount
Mar 17 17:59:49.556846 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:59:49.556846 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:59:49.560698 ignition[1015]: INFO : umount: umount passed
Mar 17 17:59:49.560698 ignition[1015]: INFO : Ignition finished successfully
Mar 17 17:59:49.561001 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:59:49.561121 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:59:49.562798 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:59:49.562889 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:59:49.563909 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:59:49.563974 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:59:49.564824 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:59:49.564880 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:59:49.565975 systemd[1]: Stopped target network.target - Network.
Mar 17 17:59:49.568435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:59:49.568499 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:59:49.569511 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:59:49.570545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:59:49.570769 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:59:49.571594 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:59:49.572681 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:59:49.574436 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:59:49.574485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:59:49.575607 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:59:49.575678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:59:49.578953 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:59:49.579017 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:59:49.584294 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:59:49.584352 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:59:49.586099 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:59:49.590196 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:59:49.590224 systemd-networkd[778]: eth1: DHCPv6 lease lost
Mar 17 17:59:49.593916 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:59:49.596344 systemd-networkd[778]: eth0: DHCPv6 lease lost
Mar 17 17:59:49.598540 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:59:49.598664 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:59:49.607091 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:59:49.607242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:59:49.611333 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:59:49.611405 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:59:49.620857 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:59:49.621811 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:59:49.622439 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:59:49.623703 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:59:49.623757 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:59:49.627124 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:59:49.627176 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:59:49.628434 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:59:49.628478 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:59:49.629803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:59:49.633261 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:59:49.633356 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:59:49.639365 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:59:49.639460 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:59:49.641692 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:59:49.641857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:59:49.643046 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:59:49.643151 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:59:49.644546 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:59:49.644607 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:59:49.645703 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:59:49.645740 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:59:49.646708 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:59:49.646755 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:59:49.648188 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:59:49.648232 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:59:49.649241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:59:49.649284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:59:49.658768 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:59:49.659884 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:59:49.659935 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:59:49.661739 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:59:49.661785 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:59:49.662335 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:59:49.662377 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:59:49.662881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:59:49.662923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:49.664899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:59:49.664988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:59:49.666329 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:59:49.673775 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:59:49.681450 systemd[1]: Switching root.
Mar 17 17:59:49.710307 systemd-journald[188]: Journal stopped
Mar 17 17:59:50.713906 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:59:50.713972 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:59:50.713985 kernel: SELinux: policy capability open_perms=1
Mar 17 17:59:50.714004 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:59:50.714013 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:59:50.714022 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:59:50.714036 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:59:50.714046 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:59:50.714058 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:59:50.714072 kernel: audit: type=1403 audit(1742234389.883:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:59:50.714082 systemd[1]: Successfully loaded SELinux policy in 42.360ms.
Mar 17 17:59:50.714100 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.860ms.
Mar 17 17:59:50.714111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:59:50.714122 systemd[1]: Detected virtualization kvm.
Mar 17 17:59:50.714135 systemd[1]: Detected architecture x86-64.
Mar 17 17:59:50.714145 systemd[1]: Detected first boot.
Mar 17 17:59:50.714154 systemd[1]: Hostname set to .
Mar 17 17:59:50.714164 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:59:50.714175 zram_generator::config[1079]: No configuration found.
Mar 17 17:59:50.714186 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:59:50.714196 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:59:50.714205 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:59:50.714218 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:59:50.714228 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:59:50.714238 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:59:50.714248 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:59:50.714259 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:59:50.714269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:59:50.714279 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:59:50.714290 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:59:50.714299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:59:50.714312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:59:50.714322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:59:50.714338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:59:50.714348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:59:50.714358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:59:50.714367 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:59:50.714378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:59:50.714388 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:59:50.714398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:59:50.714411 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:59:50.714421 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:59:50.714431 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:59:50.714441 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:59:50.714452 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:59:50.714468 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:59:50.714480 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:59:50.714490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:59:50.714501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:59:50.714527 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:59:50.714540 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:59:50.714550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:59:50.714560 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:59:50.714577 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:59:50.714588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:50.714598 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:59:50.714608 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:59:50.722308 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:59:50.722343 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:59:50.722355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:50.722367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:59:50.722384 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:59:50.722394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:50.722404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:59:50.722419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:50.722430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:59:50.722440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:50.722451 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:59:50.722461 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 17:59:50.722475 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 17:59:50.722485 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:59:50.722495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:59:50.722505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:59:50.722527 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:59:50.722537 kernel: fuse: init (API version 7.39)
Mar 17 17:59:50.722578 systemd-journald[1173]: Collecting audit messages is disabled.
Mar 17 17:59:50.722599 kernel: loop: module loaded
Mar 17 17:59:50.722613 kernel: ACPI: bus type drm_connector registered
Mar 17 17:59:50.722646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:59:50.722659 systemd-journald[1173]: Journal started
Mar 17 17:59:50.722679 systemd-journald[1173]: Runtime Journal (/run/log/journal/e3ed873c65ce467aabf4ae0ca71ad1aa) is 4.8M, max 38.4M, 33.6M free.
Mar 17 17:59:50.725646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:50.728791 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:59:50.730232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:59:50.731060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:59:50.731694 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:59:50.732272 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:59:50.733003 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:59:50.733741 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:59:50.734546 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:59:50.735451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:59:50.736339 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:59:50.736543 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:59:50.737565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:50.737850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:50.738674 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:59:50.738921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:59:50.740021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:59:50.740255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:59:50.741087 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:59:50.741318 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:59:50.742268 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:59:50.742541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:59:50.743594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:59:50.744453 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:59:50.745503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:59:50.761419 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:59:50.767717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:59:50.770706 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:59:50.772840 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:59:50.783806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:59:50.788877 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:59:50.790365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:59:50.799743 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:59:50.801849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:59:50.807102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:59:50.818740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:59:50.827017 systemd-journald[1173]: Time spent on flushing to /var/log/journal/e3ed873c65ce467aabf4ae0ca71ad1aa is 40.902ms for 1126 entries.
Mar 17 17:59:50.827017 systemd-journald[1173]: System Journal (/var/log/journal/e3ed873c65ce467aabf4ae0ca71ad1aa) is 8.0M, max 584.8M, 576.8M free.
Mar 17 17:59:50.889749 systemd-journald[1173]: Received client request to flush runtime journal.
Mar 17 17:59:50.825929 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:59:50.829960 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:59:50.830837 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:59:50.834575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:59:50.864451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:59:50.875799 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:59:50.884669 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Mar 17 17:59:50.884680 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Mar 17 17:59:50.885745 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:59:50.894070 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:59:50.897540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:59:50.908915 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:59:50.911557 udevadm[1229]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:59:50.936740 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:59:50.948774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:59:50.962568 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Mar 17 17:59:50.962888 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Mar 17 17:59:50.967868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:59:51.308104 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:59:51.320014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:59:51.342293 systemd-udevd[1248]: Using default interface naming scheme 'v255'.
Mar 17 17:59:51.380476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:59:51.389736 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:59:51.406739 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:59:51.459959 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:59:51.468098 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 17 17:59:51.542538 systemd-networkd[1251]: lo: Link UP
Mar 17 17:59:51.542553 systemd-networkd[1251]: lo: Gained carrier
Mar 17 17:59:51.546991 systemd-networkd[1251]: Enumeration completed
Mar 17 17:59:51.547774 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:51.547786 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:59:51.547918 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:59:51.549913 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:51.549923 systemd-networkd[1251]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:59:51.550392 systemd-networkd[1251]: eth0: Link UP
Mar 17 17:59:51.550396 systemd-networkd[1251]: eth0: Gained carrier
Mar 17 17:59:51.550407 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:51.554955 systemd-networkd[1251]: eth1: Link UP
Mar 17 17:59:51.555049 systemd-networkd[1251]: eth1: Gained carrier
Mar 17 17:59:51.555100 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:51.557688 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 17 17:59:51.559774 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:59:51.571641 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:59:51.579647 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:59:51.580710 systemd-networkd[1251]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:59:51.593271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:51.594285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:51.597770 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:51.601767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:51.609641 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1252)
Mar 17 17:59:51.607000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:51.611865 systemd-networkd[1251]: eth0: DHCPv4 address 157.180.43.77/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:59:51.612802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:51.613421 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:59:51.613460 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:59:51.613516 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:51.614075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:51.614284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:51.619468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:59:51.619837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:59:51.624609 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:59:51.635762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:59:51.642278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:59:51.644811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:59:51.649000 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:59:51.651920 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:59:51.652376 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:59:51.651277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:59:51.655642 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 17 17:59:51.683643 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 17 17:59:51.694400 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:59:51.694453 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 17 17:59:51.718640 kernel: Console: switching to colour dummy device 80x25
Mar 17 17:59:51.722182 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:59:51.722221 kernel: [drm] features: -context_init
Mar 17 17:59:51.724660 kernel: [drm] number of scanouts: 1
Mar 17 17:59:51.726119 kernel: [drm] number of cap sets: 0
Mar 17 17:59:51.731643 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:59:51.739087 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 17 17:59:51.739687 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:59:51.742037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:59:51.748650 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:59:51.756072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:59:51.768535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:59:51.768863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:51.777751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:59:51.841284 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:59:51.867206 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:59:51.877898 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:59:51.897168 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:59:51.931220 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:59:51.934285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:59:51.942850 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:59:51.952137 lvm[1318]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:59:51.995585 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:59:52.000002 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:59:52.000201 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:59:52.000242 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:59:52.000381 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:59:52.002897 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:59:52.011886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:59:52.017536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:59:52.020244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:52.028906 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:59:52.040702 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:59:52.051854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:59:52.054918 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:59:52.069931 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:59:52.077035 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 17:59:52.088768 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:59:52.089704 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:59:52.116661 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:59:52.144674 kernel: loop1: detected capacity change from 0 to 140992
Mar 17 17:59:52.186684 kernel: loop2: detected capacity change from 0 to 138184
Mar 17 17:59:52.233652 kernel: loop3: detected capacity change from 0 to 8
Mar 17 17:59:52.255806 kernel: loop4: detected capacity change from 0 to 210664
Mar 17 17:59:52.281713 kernel: loop5: detected capacity change from 0 to 140992
Mar 17 17:59:52.304449 kernel: loop6: detected capacity change from 0 to 138184
Mar 17 17:59:52.329648 kernel: loop7: detected capacity change from 0 to 8
Mar 17 17:59:52.332099 (sd-merge)[1339]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:59:52.332868 (sd-merge)[1339]: Merged extensions into '/usr'.
Mar 17 17:59:52.337862 systemd[1]: Reloading requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:59:52.337997 systemd[1]: Reloading...
Mar 17 17:59:52.418196 zram_generator::config[1367]: No configuration found.
Mar 17 17:59:52.496652 ldconfig[1322]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:59:52.539532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:52.608714 systemd[1]: Reloading finished in 270 ms.
Mar 17 17:59:52.625172 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:59:52.629018 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:59:52.642788 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:59:52.647741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:59:52.658321 systemd[1]: Reloading requested from client PID 1417 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:59:52.658348 systemd[1]: Reloading...
Mar 17 17:59:52.666688 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:59:52.667270 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:59:52.668156 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:59:52.668449 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Mar 17 17:59:52.668592 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Mar 17 17:59:52.672270 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:59:52.672338 systemd-tmpfiles[1418]: Skipping /boot
Mar 17 17:59:52.682001 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:59:52.682071 systemd-tmpfiles[1418]: Skipping /boot
Mar 17 17:59:52.739656 zram_generator::config[1457]: No configuration found.
Mar 17 17:59:52.827461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:52.883510 systemd[1]: Reloading finished in 224 ms.
Mar 17 17:59:52.898504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:59:52.930989 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:59:52.950069 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:59:52.957919 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:59:52.969870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:59:52.977930 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:59:52.989069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:52.989317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:52.997536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:53.009270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:53.022804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:53.023387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:53.023485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:53.028090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:53.028883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:53.037800 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:53.037995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:53.051745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:53.052587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:53.053680 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:53.054583 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:59:53.058153 augenrules[1533]: No rules
Mar 17 17:59:53.062257 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:59:53.062562 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:59:53.067025 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:59:53.070169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:59:53.070376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:59:53.076945 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:59:53.077167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:59:53.080577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:53.081233 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:53.096416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:53.098510 systemd-resolved[1508]: Positive Trust Anchors:
Mar 17 17:59:53.098533 systemd-resolved[1508]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:59:53.098560 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:59:53.103852 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:59:53.109494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:59:53.110202 systemd-resolved[1508]: Using system hostname 'ci-4152-2-2-2-c2b93240d2'.
Mar 17 17:59:53.113827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:59:53.119872 augenrules[1547]: /sbin/augenrules: No change
Mar 17 17:59:53.124699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:59:53.134984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:59:53.139229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:59:53.141580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:59:53.144819 augenrules[1569]: No rules
Mar 17 17:59:53.149370 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:59:53.149951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:59:53.152925 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:59:53.154506 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:59:53.154790 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:59:53.155547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:59:53.158225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:59:53.162995 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:59:53.163189 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:59:53.164008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:59:53.164188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:59:53.166244 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:59:53.166434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:59:53.172151 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:59:53.181711 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:59:53.187551 systemd[1]: Reached target network.target - Network.
Mar 17 17:59:53.189184 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:59:53.189780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:59:53.189850 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:59:53.196751 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:59:53.200109 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:59:53.202059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:59:53.257068 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:59:53.258245 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:59:53.259064 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:59:53.261855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:59:53.262661 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:59:53.263220 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:59:53.263247 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:59:53.264356 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:59:53.265175 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:59:53.266043 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:59:53.266975 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:59:53.273978 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:59:53.277867 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:59:53.284786 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:59:53.285684 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:59:53.286203 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:59:53.287772 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:59:53.289229 systemd[1]: System is tainted: cgroupsv1
Mar 17 17:59:53.289270 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:59:53.289293 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:59:53.296734 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:59:53.304787 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:59:53.309107 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:59:53.323362 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:59:53.338812 jq[1599]: false
Mar 17 17:59:53.339844 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:59:53.340552 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:59:53.351494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:59:53.356561 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:59:53.360891 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:59:53.367110 coreos-metadata[1597]: Mar 17 17:59:53.367 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:59:53.374613 coreos-metadata[1597]: Mar 17 17:59:53.374 INFO Fetch successful
Mar 17 17:59:53.374613 coreos-metadata[1597]: Mar 17 17:59:53.374 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:59:53.374613 coreos-metadata[1597]: Mar 17 17:59:53.374 INFO Fetch successful
Mar 17 17:59:53.374457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:59:53.381167 extend-filesystems[1602]: Found loop4
Mar 17 17:59:53.381167 extend-filesystems[1602]: Found loop5
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found loop6
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found loop7
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda1
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda2
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda3
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found usr
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda4
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda6
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda7
Mar 17 17:59:53.385553 extend-filesystems[1602]: Found sda9
Mar 17 17:59:53.385553 extend-filesystems[1602]: Checking size of /dev/sda9
Mar 17 17:59:53.391136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:59:53.403838 systemd-networkd[1251]: eth0: Gained IPv6LL
Mar 17 17:59:53.420182 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:59:53.422857 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:59:53.427997 dbus-daemon[1598]: [system] SELinux support is enabled
Mar 17 17:59:53.432367 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:59:54.527504 systemd-timesyncd[1590]: Contacted time server 217.160.19.219:123 (0.flatcar.pool.ntp.org).
Mar 17 17:59:54.527560 systemd-timesyncd[1590]: Initial clock synchronization to Mon 2025-03-17 17:59:54.527394 UTC.
Mar 17 17:59:54.532809 extend-filesystems[1602]: Resized partition /dev/sda9
Mar 17 17:59:54.528372 systemd-resolved[1508]: Clock change detected. Flushing caches.
Mar 17 17:59:54.532666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:59:54.540329 extend-filesystems[1633]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:59:54.538764 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:59:54.545816 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:59:54.548940 jq[1631]: true
Mar 17 17:59:54.558335 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 17 17:59:54.558981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:59:54.559311 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:59:54.559643 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:59:54.559899 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:59:54.574704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:59:54.577799 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:59:54.596806 update_engine[1625]: I20250317 17:59:54.596725 1625 main.cc:92] Flatcar Update Engine starting
Mar 17 17:59:54.607506 update_engine[1625]: I20250317 17:59:54.599699 1625 update_check_scheduler.cc:74] Next update check in 7m49s
Mar 17 17:59:54.629751 jq[1638]: true
Mar 17 17:59:54.629830 systemd-logind[1624]: New seat seat0.
Mar 17 17:59:54.635311 (ntainerd)[1649]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:59:54.637858 systemd-logind[1624]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 17 17:59:54.637876 systemd-logind[1624]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:59:54.639954 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:59:54.653335 tar[1637]: linux-amd64/helm
Mar 17 17:59:54.657424 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:59:54.668434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:59:54.665092 dbus-daemon[1598]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 17:59:54.678430 systemd-networkd[1251]: eth1: Gained IPv6LL
Mar 17 17:59:54.679856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:59:54.684349 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:59:54.684381 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:59:54.684845 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:59:54.684863 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:59:54.696360 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1253)
Mar 17 17:59:54.708623 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:59:54.712171 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 17 17:59:54.720014 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:59:54.722029 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:59:54.727336 extend-filesystems[1633]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:59:54.727336 extend-filesystems[1633]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:59:54.727336 extend-filesystems[1633]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:59:54.729743 extend-filesystems[1602]: Resized filesystem in /dev/sda9
Mar 17 17:59:54.729743 extend-filesystems[1602]: Found sr0
Mar 17 17:59:54.730045 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:59:54.732550 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:59:54.786732 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:59:54.790126 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:59:54.819536 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:59:54.864984 bash[1694]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:59:54.867950 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:59:54.881017 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:59:54.891861 systemd[1]: Starting sshkeys.service...
Mar 17 17:59:54.934834 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:59:54.949720 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:59:54.955588 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:59:54.972581 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:59:54.990455 coreos-metadata[1716]: Mar 17 17:59:54.990 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:59:54.991048 coreos-metadata[1716]: Mar 17 17:59:54.991 INFO Fetch successful
Mar 17 17:59:54.993635 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:59:54.993900 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:59:55.000599 unknown[1716]: wrote ssh authorized keys file for user: core
Mar 17 17:59:55.006592 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:59:55.016495 locksmithd[1671]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:59:55.034593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:59:55.044608 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:59:55.058648 update-ssh-keys[1729]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:59:55.060427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:59:55.061055 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:59:55.062097 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:59:55.073778 systemd[1]: Finished sshkeys.service.
Mar 17 17:59:55.095626 containerd[1649]: time="2025-03-17T17:59:55.095554121Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:59:55.126878 containerd[1649]: time="2025-03-17T17:59:55.126845095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130437050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130460153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130474741Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130621836Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130636063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130697038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130707888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130914094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130927029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130937187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131711 containerd[1649]: time="2025-03-17T17:59:55.130944742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131911 containerd[1649]: time="2025-03-17T17:59:55.131024532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131911 containerd[1649]: time="2025-03-17T17:59:55.131405316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131911 containerd[1649]: time="2025-03-17T17:59:55.131538746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:59:55.131911 containerd[1649]: time="2025-03-17T17:59:55.131549757Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:59:55.131911 containerd[1649]: time="2025-03-17T17:59:55.131632932Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:59:55.131911 containerd[1649]: time="2025-03-17T17:59:55.131681163Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:59:55.139140 containerd[1649]: time="2025-03-17T17:59:55.139122616Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:59:55.140733 containerd[1649]: time="2025-03-17T17:59:55.140716415Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:59:55.140832 containerd[1649]: time="2025-03-17T17:59:55.140819368Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:59:55.140902 containerd[1649]: time="2025-03-17T17:59:55.140872527Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:59:55.140950 containerd[1649]: time="2025-03-17T17:59:55.140939533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:59:55.141199 containerd[1649]: time="2025-03-17T17:59:55.141166219Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:59:55.142002 containerd[1649]: time="2025-03-17T17:59:55.141904122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:59:55.143217 containerd[1649]: time="2025-03-17T17:59:55.143199601Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:59:55.143274 containerd[1649]: time="2025-03-17T17:59:55.143261096Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:59:55.143337 containerd[1649]: time="2025-03-17T17:59:55.143310038Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:59:55.143385 containerd[1649]: time="2025-03-17T17:59:55.143374529Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144427 containerd[1649]: time="2025-03-17T17:59:55.144411764Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144480 containerd[1649]: time="2025-03-17T17:59:55.144468981Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144523 containerd[1649]: time="2025-03-17T17:59:55.144513364Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144576 containerd[1649]: time="2025-03-17T17:59:55.144565662Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144626 containerd[1649]: time="2025-03-17T17:59:55.144616057Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144684 containerd[1649]: time="2025-03-17T17:59:55.144673475Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144731 containerd[1649]: time="2025-03-17T17:59:55.144720994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:59:55.144791 containerd[1649]: time="2025-03-17T17:59:55.144779554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145759611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145776893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145788105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145798564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145808804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145817880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145828140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145838540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145850041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145859789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145868535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145877863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145888994Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145904603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.145964 containerd[1649]: time="2025-03-17T17:59:55.145915073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.146220 containerd[1649]: time="2025-03-17T17:59:55.145923007Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:59:55.146356 containerd[1649]: time="2025-03-17T17:59:55.146339468Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:59:55.146426 containerd[1649]: time="2025-03-17T17:59:55.146410502Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:59:55.146486 containerd[1649]: time="2025-03-17T17:59:55.146458361Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:59:55.146533 containerd[1649]: time="2025-03-17T17:59:55.146521009Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:59:55.146600 containerd[1649]: time="2025-03-17T17:59:55.146587924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.146663 containerd[1649]: time="2025-03-17T17:59:55.146641114Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:59:55.146723 containerd[1649]: time="2025-03-17T17:59:55.146711075Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:59:55.146839 containerd[1649]: time="2025-03-17T17:59:55.146825039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:59:55.148148 containerd[1649]: time="2025-03-17T17:59:55.148107534Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:59:55.148720 containerd[1649]: time="2025-03-17T17:59:55.148335772Z" level=info msg="Connect containerd service"
Mar 17 17:59:55.148720 containerd[1649]: time="2025-03-17T17:59:55.148364035Z" level=info msg="using legacy CRI server"
Mar 17 17:59:55.148720 containerd[1649]: time="2025-03-17T17:59:55.148371839Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:59:55.148720 containerd[1649]: time="2025-03-17T17:59:55.148487135Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:59:55.150099 containerd[1649]: time="2025-03-17T17:59:55.150080894Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:59:55.150276 containerd[1649]: time="2025-03-17T17:59:55.150250482Z" level=info msg="Start subscribing containerd event"
Mar 17 17:59:55.150472 containerd[1649]: time="2025-03-17T17:59:55.150459174Z" level=info msg="Start recovering state"
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150551847Z" level=info msg="Start event monitor"
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150578287Z" level=info msg="Start snapshots syncer"
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150586212Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150592082Z" level=info msg="Start streaming server"
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150788982Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150835680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:59:55.151338 containerd[1649]: time="2025-03-17T17:59:55.150885783Z" level=info msg="containerd successfully booted in 0.057035s"
Mar 17 17:59:55.151013 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:59:55.365999 tar[1637]: linux-amd64/LICENSE
Mar 17 17:59:55.366383 tar[1637]: linux-amd64/README.md
Mar 17 17:59:55.378549 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:59:55.819832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:55.826002 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:59:55.830644 systemd[1]: Startup finished in 8.396s (kernel) + 4.906s (userspace) = 13.303s.
Mar 17 17:59:55.836056 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:59:56.459123 kubelet[1758]: E0317 17:59:56.459042 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:59:56.461525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:59:56.461777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:06.712790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:00:06.722568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:06.885456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:06.889122 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:06.928629 kubelet[1783]: E0317 18:00:06.928579 1783 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:06.934802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:06.935067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:17.185856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:00:17.193597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:17.365417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:17.367356 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:17.405503 kubelet[1804]: E0317 18:00:17.405463 1804 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:17.409189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:17.409433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:27.437574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 18:00:27.447488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:27.632468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:27.646168 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:27.699006 kubelet[1825]: E0317 18:00:27.698898 1825 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:27.702369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:27.702658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:37.937986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 18:00:37.945586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:38.107636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:38.111151 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:38.148735 kubelet[1846]: E0317 18:00:38.148686 1846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:38.151968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:38.152196 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:39.571437 update_engine[1625]: I20250317 18:00:39.571310 1625 update_attempter.cc:509] Updating boot flags...
Mar 17 18:00:39.625413 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1864)
Mar 17 18:00:39.674199 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1863)
Mar 17 18:00:39.719473 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1863)
Mar 17 18:00:48.187827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 18:00:48.199942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:48.361493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:48.365075 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:48.405611 kubelet[1888]: E0317 18:00:48.405567 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:48.409070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:48.409305 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:00:58.437685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 18:00:58.448568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:00:58.621517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:00:58.621718 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:00:58.657265 kubelet[1909]: E0317 18:00:58.657199 1909 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:00:58.660428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:00:58.660722 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:01:08.688028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 17 18:01:08.701616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:01:08.863457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:01:08.866638 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:01:08.899680 kubelet[1930]: E0317 18:01:08.899630 1930 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:01:08.904283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:01:08.904633 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:01:18.937573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 17 18:01:18.950546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:01:19.085137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:01:19.089444 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:01:19.123345 kubelet[1951]: E0317 18:01:19.123242 1951 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:01:19.127021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:01:19.127232 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:01:29.187989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 17 18:01:29.195576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:01:29.380466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:01:29.384555 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:01:29.424525 kubelet[1972]: E0317 18:01:29.424455 1972 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:01:29.428045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:01:29.428359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:01:39.437699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 17 18:01:39.443768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:01:39.577496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:01:39.579460 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:01:39.616260 kubelet[1992]: E0317 18:01:39.616196 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:01:39.619956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:01:39.620285 systemd[1]: kubelet.service: Failed with result 'exit-code'.
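Every attempt in the restart loop above fails identically: kubelet exits with status 1 because /var/lib/kubelet/config.yaml is absent. That file is normally written by `kubeadm init` or `kubeadm join`, so this loop is expected on a node that has not yet joined a cluster. A minimal sketch, assuming journal text in the format shown here, that tallies the restart counter and collects the missing paths behind the failures (the function and regex are illustrative, not part of any tool in this log):

```python
import re

def summarize_kubelet_failures(journal_text: str):
    """Return (highest restart counter seen, set of paths reported missing)."""
    # systemd emits: "Scheduled restart job, restart counter is at N."
    counters = [int(n) for n in re.findall(r"restart counter is at (\d+)", journal_text)]
    # kubelet's error ends with: "open <path>: no such file or directory"
    missing = set(re.findall(r"open ([^\s:]+): no such file or directory", journal_text))
    return (max(counters) if counters else 0), missing
```

Running it over the excerpt above yields counter 10 and the single missing path, confirming every restart shares one root cause.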
Mar 17 18:01:49.687892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 17 18:01:49.695933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:01:49.860497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:01:49.864644 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:01:49.901604 kubelet[2013]: E0317 18:01:49.901541 2013 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:01:49.905029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:01:49.905366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:01:52.904368 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 18:01:52.918435 systemd[1]: Started sshd@0-157.180.43.77:22-139.178.68.195:59170.service - OpenSSH per-connection server daemon (139.178.68.195:59170).
Mar 17 18:01:53.921556 sshd[2022]: Accepted publickey for core from 139.178.68.195 port 59170 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:01:53.924006 sshd-session[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:53.931762 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 18:01:53.937521 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 18:01:53.939610 systemd-logind[1624]: New session 1 of user core.
Mar 17 18:01:53.955561 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 18:01:53.975595 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 18:01:53.979144 (systemd)[2028]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:01:54.075802 systemd[2028]: Queued start job for default target default.target.
Mar 17 18:01:54.076173 systemd[2028]: Created slice app.slice - User Application Slice.
Mar 17 18:01:54.076191 systemd[2028]: Reached target paths.target - Paths.
Mar 17 18:01:54.076203 systemd[2028]: Reached target timers.target - Timers.
Mar 17 18:01:54.081419 systemd[2028]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 18:01:54.092502 systemd[2028]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 18:01:54.092580 systemd[2028]: Reached target sockets.target - Sockets.
Mar 17 18:01:54.092600 systemd[2028]: Reached target basic.target - Basic System.
Mar 17 18:01:54.092657 systemd[2028]: Reached target default.target - Main User Target.
Mar 17 18:01:54.092699 systemd[2028]: Startup finished in 106ms.
Mar 17 18:01:54.093424 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 18:01:54.105576 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 18:01:54.804709 systemd[1]: Started sshd@1-157.180.43.77:22-139.178.68.195:59172.service - OpenSSH per-connection server daemon (139.178.68.195:59172).
Mar 17 18:01:55.831139 sshd[2040]: Accepted publickey for core from 139.178.68.195 port 59172 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:01:55.833421 sshd-session[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:55.840571 systemd-logind[1624]: New session 2 of user core.
Mar 17 18:01:55.849693 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 18:01:56.524992 sshd[2043]: Connection closed by 139.178.68.195 port 59172
Mar 17 18:01:56.525895 sshd-session[2040]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:56.531016 systemd-logind[1624]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:01:56.532476 systemd[1]: sshd@1-157.180.43.77:22-139.178.68.195:59172.service: Deactivated successfully.
Mar 17 18:01:56.535781 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:01:56.536842 systemd-logind[1624]: Removed session 2.
Mar 17 18:01:56.693548 systemd[1]: Started sshd@2-157.180.43.77:22-139.178.68.195:58634.service - OpenSSH per-connection server daemon (139.178.68.195:58634).
Mar 17 18:01:57.672543 sshd[2048]: Accepted publickey for core from 139.178.68.195 port 58634 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:01:57.675592 sshd-session[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:57.683508 systemd-logind[1624]: New session 3 of user core.
Mar 17 18:01:57.691844 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 18:01:58.344805 sshd[2051]: Connection closed by 139.178.68.195 port 58634
Mar 17 18:01:58.345458 sshd-session[2048]: pam_unix(sshd:session): session closed for user core
Mar 17 18:01:58.349716 systemd-logind[1624]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:01:58.351780 systemd[1]: sshd@2-157.180.43.77:22-139.178.68.195:58634.service: Deactivated successfully.
Mar 17 18:01:58.354973 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:01:58.355915 systemd-logind[1624]: Removed session 3.
Mar 17 18:01:58.512720 systemd[1]: Started sshd@3-157.180.43.77:22-139.178.68.195:58638.service - OpenSSH per-connection server daemon (139.178.68.195:58638).
Mar 17 18:01:59.522268 sshd[2056]: Accepted publickey for core from 139.178.68.195 port 58638 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:01:59.523760 sshd-session[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:01:59.528237 systemd-logind[1624]: New session 4 of user core.
Mar 17 18:01:59.537609 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 18:01:59.937442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 17 18:01:59.943597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:00.081462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:00.084276 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:02:00.117370 kubelet[2073]: E0317 18:02:00.117313 2073 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:02:00.120302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:02:00.120592 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:02:00.203359 sshd[2059]: Connection closed by 139.178.68.195 port 58638
Mar 17 18:02:00.204452 sshd-session[2056]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:00.209817 systemd[1]: sshd@3-157.180.43.77:22-139.178.68.195:58638.service: Deactivated successfully.
Mar 17 18:02:00.210040 systemd-logind[1624]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:02:00.211995 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:02:00.213492 systemd-logind[1624]: Removed session 4.
Mar 17 18:02:00.367674 systemd[1]: Started sshd@4-157.180.43.77:22-139.178.68.195:58642.service - OpenSSH per-connection server daemon (139.178.68.195:58642).
Mar 17 18:02:01.359738 sshd[2085]: Accepted publickey for core from 139.178.68.195 port 58642 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:01.361483 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:01.366436 systemd-logind[1624]: New session 5 of user core.
Mar 17 18:02:01.383644 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 18:02:01.885783 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 18:02:01.886171 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:02:01.902474 sudo[2089]: pam_unix(sudo:session): session closed for user root
Mar 17 18:02:02.059596 sshd[2088]: Connection closed by 139.178.68.195 port 58642
Mar 17 18:02:02.060462 sshd-session[2085]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:02.063940 systemd[1]: sshd@4-157.180.43.77:22-139.178.68.195:58642.service: Deactivated successfully.
Mar 17 18:02:02.068078 systemd-logind[1624]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:02:02.068105 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:02:02.070422 systemd-logind[1624]: Removed session 5.
Mar 17 18:02:02.224767 systemd[1]: Started sshd@5-157.180.43.77:22-139.178.68.195:58652.service - OpenSSH per-connection server daemon (139.178.68.195:58652).
Mar 17 18:02:03.199729 sshd[2094]: Accepted publickey for core from 139.178.68.195 port 58652 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:03.202132 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:03.208520 systemd-logind[1624]: New session 6 of user core.
Mar 17 18:02:03.219937 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 18:02:03.723364 sudo[2099]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 18:02:03.724024 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:02:03.729540 sudo[2099]: pam_unix(sudo:session): session closed for user root
Mar 17 18:02:03.737062 sudo[2098]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 18:02:03.737523 sudo[2098]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:02:03.759665 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 18:02:03.805178 augenrules[2121]: No rules
Mar 17 18:02:03.806276 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 18:02:03.806746 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 18:02:03.810592 sudo[2098]: pam_unix(sudo:session): session closed for user root
Mar 17 18:02:03.968256 sshd[2097]: Connection closed by 139.178.68.195 port 58652
Mar 17 18:02:03.969237 sshd-session[2094]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:03.973288 systemd[1]: sshd@5-157.180.43.77:22-139.178.68.195:58652.service: Deactivated successfully.
Mar 17 18:02:03.979454 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:02:03.982256 systemd-logind[1624]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:02:03.983974 systemd-logind[1624]: Removed session 6.
Mar 17 18:02:04.132927 systemd[1]: Started sshd@6-157.180.43.77:22-139.178.68.195:58668.service - OpenSSH per-connection server daemon (139.178.68.195:58668).
Mar 17 18:02:05.118461 sshd[2130]: Accepted publickey for core from 139.178.68.195 port 58668 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:05.120034 sshd-session[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:05.124720 systemd-logind[1624]: New session 7 of user core.
Mar 17 18:02:05.134597 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 18:02:05.635693 sudo[2134]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:02:05.636080 sudo[2134]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 18:02:05.878499 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 18:02:05.879353 (dockerd)[2153]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 18:02:06.102639 dockerd[2153]: time="2025-03-17T18:02:06.102591857Z" level=info msg="Starting up"
Mar 17 18:02:06.190644 dockerd[2153]: time="2025-03-17T18:02:06.190401124Z" level=info msg="Loading containers: start."
Mar 17 18:02:06.341344 kernel: Initializing XFRM netlink socket
Mar 17 18:02:06.415783 systemd-networkd[1251]: docker0: Link UP
Mar 17 18:02:06.442539 dockerd[2153]: time="2025-03-17T18:02:06.442496493Z" level=info msg="Loading containers: done."
Mar 17 18:02:06.457556 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck873450568-merged.mount: Deactivated successfully.
Mar 17 18:02:06.458891 dockerd[2153]: time="2025-03-17T18:02:06.458852811Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:02:06.458956 dockerd[2153]: time="2025-03-17T18:02:06.458936920Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Mar 17 18:02:06.459101 dockerd[2153]: time="2025-03-17T18:02:06.459041689Z" level=info msg="Daemon has completed initialization"
Mar 17 18:02:06.489523 dockerd[2153]: time="2025-03-17T18:02:06.489485366Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:02:06.489595 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 18:02:07.552581 containerd[1649]: time="2025-03-17T18:02:07.552531527Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 18:02:08.091375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597939108.mount: Deactivated successfully.
Mar 17 18:02:09.147251 containerd[1649]: time="2025-03-17T18:02:09.146787615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:09.147984 containerd[1649]: time="2025-03-17T18:02:09.147787549Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674667"
Mar 17 18:02:09.148790 containerd[1649]: time="2025-03-17T18:02:09.148735744Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:09.151181 containerd[1649]: time="2025-03-17T18:02:09.151125991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:09.152269 containerd[1649]: time="2025-03-17T18:02:09.152060962Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 1.599491944s"
Mar 17 18:02:09.152269 containerd[1649]: time="2025-03-17T18:02:09.152092983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 18:02:09.172466 containerd[1649]: time="2025-03-17T18:02:09.172441501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:02:10.187946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
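The containerd entries above record each pull's byte count and wall time in the closing `Pulled image ... in <duration>` message. A small sketch that extracts those timings, assuming log lines shaped like the ones here, where the backslash-escaped quotes are literal characters in the journal text (the regex and names are illustrative, not part of containerd):

```python
import re

# Matches the literal sequence: Pulled image \"<name>\" ... \" in <value><s|ms>"
_PULLED = re.compile(r'Pulled image \\"([^\\"]+)\\".*?\\" in ([0-9.]+)(m?s)"')

def pull_durations(journal_text: str):
    """Return (image, seconds) pairs from containerd 'Pulled image' entries."""
    results = []
    for image, value, unit in _PULLED.findall(journal_text):
        seconds = float(value) / 1000.0 if unit == "ms" else float(value)
        results.append((image, seconds))
    return results
```

Applied to this log it would surface, for example, that etcd:3.5.12-0 (the largest image) took about 2.06 s while pause:3.9 took under half a second.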
Mar 17 18:02:10.195494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:10.378588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:10.384180 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 18:02:10.422896 kubelet[2419]: E0317 18:02:10.422505 2419 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:02:10.428449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:02:10.428653 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:02:10.641698 containerd[1649]: time="2025-03-17T18:02:10.641570436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:10.642797 containerd[1649]: time="2025-03-17T18:02:10.642747355Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619794"
Mar 17 18:02:10.643812 containerd[1649]: time="2025-03-17T18:02:10.643768878Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:10.646032 containerd[1649]: time="2025-03-17T18:02:10.645999473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:10.647017 containerd[1649]: time="2025-03-17T18:02:10.646919684Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.474357246s"
Mar 17 18:02:10.647017 containerd[1649]: time="2025-03-17T18:02:10.646946465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 18:02:10.669802 containerd[1649]: time="2025-03-17T18:02:10.669765742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:02:11.667521 containerd[1649]: time="2025-03-17T18:02:11.667458443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:11.668531 containerd[1649]: time="2025-03-17T18:02:11.668494575Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903331"
Mar 17 18:02:11.669381 containerd[1649]: time="2025-03-17T18:02:11.669345295Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:11.673942 containerd[1649]: time="2025-03-17T18:02:11.673904216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:11.674670 containerd[1649]: time="2025-03-17T18:02:11.674650819Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.004848617s"
Mar 17 18:02:11.674797 containerd[1649]: time="2025-03-17T18:02:11.674726111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 18:02:11.693530 containerd[1649]: time="2025-03-17T18:02:11.693497247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:02:12.708729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923128226.mount: Deactivated successfully.
Mar 17 18:02:13.045409 containerd[1649]: time="2025-03-17T18:02:13.045147114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:13.046224 containerd[1649]: time="2025-03-17T18:02:13.046185709Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185400"
Mar 17 18:02:13.046958 containerd[1649]: time="2025-03-17T18:02:13.046918967Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:13.048662 containerd[1649]: time="2025-03-17T18:02:13.048624665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:13.049344 containerd[1649]: time="2025-03-17T18:02:13.049145510Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.355618368s"
Mar 17 18:02:13.049344 containerd[1649]: time="2025-03-17T18:02:13.049169996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 18:02:13.067562 containerd[1649]: time="2025-03-17T18:02:13.067534278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:02:13.503869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833408265.mount: Deactivated successfully.
Mar 17 18:02:14.191699 containerd[1649]: time="2025-03-17T18:02:14.191658379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:14.193312 containerd[1649]: time="2025-03-17T18:02:14.193202981Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:14.193312 containerd[1649]: time="2025-03-17T18:02:14.193257534Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843"
Mar 17 18:02:14.195771 containerd[1649]: time="2025-03-17T18:02:14.195719682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:14.196881 containerd[1649]: time="2025-03-17T18:02:14.196715566Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.129154389s"
Mar 17 18:02:14.196881 containerd[1649]: time="2025-03-17T18:02:14.196741765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:02:14.220606 containerd[1649]: time="2025-03-17T18:02:14.220555399Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:02:14.673033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431405388.mount: Deactivated successfully.
Mar 17 18:02:14.682058 containerd[1649]: time="2025-03-17T18:02:14.681957618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:14.683470 containerd[1649]: time="2025-03-17T18:02:14.683400929Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312"
Mar 17 18:02:14.684291 containerd[1649]: time="2025-03-17T18:02:14.684198558Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:14.686971 containerd[1649]: time="2025-03-17T18:02:14.686896663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:14.688004 containerd[1649]: time="2025-03-17T18:02:14.687790253Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 467.2012ms"
Mar 17 18:02:14.688004 containerd[1649]: time="2025-03-17T18:02:14.687842031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 18:02:14.714557 containerd[1649]: time="2025-03-17T18:02:14.714352889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:02:15.184100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3151624663.mount: Deactivated successfully.
Mar 17 18:02:16.765335 containerd[1649]: time="2025-03-17T18:02:16.765263789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:16.766651 containerd[1649]: time="2025-03-17T18:02:16.766615134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653"
Mar 17 18:02:16.767884 containerd[1649]: time="2025-03-17T18:02:16.767847765Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:16.770613 containerd[1649]: time="2025-03-17T18:02:16.770572358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 18:02:16.771686 containerd[1649]: time="2025-03-17T18:02:16.771390135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.057006761s"
Mar 17 18:02:16.771686 containerd[1649]: time="2025-03-17T18:02:16.771415002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 18:02:19.103950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:19.109643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:19.131228 systemd[1]: Reloading requested from client PID 2625 ('systemctl') (unit session-7.scope)...
Mar 17 18:02:19.131246 systemd[1]: Reloading...
Mar 17 18:02:19.260194 zram_generator::config[2663]: No configuration found.
Mar 17 18:02:19.362779 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:02:19.425215 systemd[1]: Reloading finished in 293 ms.
Mar 17 18:02:19.477302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:19.479190 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:19.483372 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:02:19.483647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:19.489599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:19.630499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:19.639832 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 18:02:19.689022 kubelet[2734]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:02:19.689022 kubelet[2734]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:02:19.689022 kubelet[2734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:02:19.691169 kubelet[2734]: I0317 18:02:19.691109 2734 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:02:19.912179 kubelet[2734]: I0317 18:02:19.912017 2734 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:02:19.912179 kubelet[2734]: I0317 18:02:19.912048 2734 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:02:19.912427 kubelet[2734]: I0317 18:02:19.912221 2734 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:02:19.940578 kubelet[2734]: I0317 18:02:19.940522 2734 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:02:19.943351 kubelet[2734]: E0317 18:02:19.943104 2734 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.180.43.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:19.962234 kubelet[2734]: I0317 18:02:19.962194 2734 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:02:19.964772 kubelet[2734]: I0317 18:02:19.964692 2734 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:02:19.964974 kubelet[2734]: I0317 18:02:19.964729 2734 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-2-c2b93240d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:02:19.964974 kubelet[2734]: I0317 18:02:19.964927 2734 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 18:02:19.964974 kubelet[2734]: I0317 18:02:19.964937 2734 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:02:19.965304 kubelet[2734]: I0317 18:02:19.965050 2734 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:02:19.965757 kubelet[2734]: I0317 18:02:19.965720 2734 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:02:19.965757 kubelet[2734]: I0317 18:02:19.965735 2734 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:02:19.965757 kubelet[2734]: I0317 18:02:19.965753 2734 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:02:19.965757 kubelet[2734]: I0317 18:02:19.965762 2734 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:02:19.970087 kubelet[2734]: W0317 18:02:19.970008 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.43.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:19.970984 kubelet[2734]: E0317 18:02:19.970260 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.180.43.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:19.970984 kubelet[2734]: W0317 18:02:19.970420 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.43.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-2-c2b93240d2&limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:19.970984 kubelet[2734]: E0317 18:02:19.970478 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://157.180.43.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-2-c2b93240d2&limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:19.970984 kubelet[2734]: I0317 18:02:19.970600 2734 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:02:19.973282 kubelet[2734]: I0317 18:02:19.973248 2734 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:02:19.973647 kubelet[2734]: W0317 18:02:19.973442 2734 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:02:19.975258 kubelet[2734]: I0317 18:02:19.975208 2734 server.go:1264] "Started kubelet" Mar 17 18:02:19.978617 kubelet[2734]: I0317 18:02:19.978367 2734 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:02:19.980915 kubelet[2734]: I0317 18:02:19.980368 2734 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:02:19.983700 kubelet[2734]: I0317 18:02:19.983685 2734 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:02:19.984308 kubelet[2734]: I0317 18:02:19.984224 2734 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:02:19.985446 kubelet[2734]: I0317 18:02:19.984787 2734 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:02:19.985446 kubelet[2734]: E0317 18:02:19.985172 2734 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.43.77:6443/api/v1/namespaces/default/events\": dial tcp 157.180.43.77:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-2-2-c2b93240d2.182da91556a2ecff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-2-c2b93240d2,UID:ci-4152-2-2-2-c2b93240d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-2-c2b93240d2,},FirstTimestamp:2025-03-17 18:02:19.975175423 +0000 UTC m=+0.331470169,LastTimestamp:2025-03-17 18:02:19.975175423 +0000 UTC m=+0.331470169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-2-c2b93240d2,}" Mar 17 18:02:19.992944 kubelet[2734]: I0317 18:02:19.992924 2734 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:02:19.994397 kubelet[2734]: I0317 18:02:19.993765 2734 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:02:19.994397 kubelet[2734]: I0317 18:02:19.993836 2734 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:02:19.994594 kubelet[2734]: W0317 18:02:19.994562 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.43.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:19.994674 kubelet[2734]: E0317 18:02:19.994663 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.180.43.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:19.994788 kubelet[2734]: E0317 18:02:19.994766 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.43.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-2-c2b93240d2?timeout=10s\": dial tcp 157.180.43.77:6443: connect: connection refused" interval="200ms" Mar 17 18:02:19.995181 kubelet[2734]: I0317 
18:02:19.995165 2734 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:02:19.995390 kubelet[2734]: I0317 18:02:19.995375 2734 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:02:19.998381 kubelet[2734]: I0317 18:02:19.998368 2734 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:02:20.024938 kubelet[2734]: I0317 18:02:20.024769 2734 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:02:20.026801 kubelet[2734]: I0317 18:02:20.026349 2734 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:02:20.026801 kubelet[2734]: I0317 18:02:20.026384 2734 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:02:20.026801 kubelet[2734]: I0317 18:02:20.026413 2734 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:02:20.026801 kubelet[2734]: E0317 18:02:20.026459 2734 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:02:20.031354 kubelet[2734]: I0317 18:02:20.031331 2734 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:02:20.031354 kubelet[2734]: I0317 18:02:20.031348 2734 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:02:20.031447 kubelet[2734]: I0317 18:02:20.031373 2734 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:02:20.032966 kubelet[2734]: I0317 18:02:20.032944 2734 policy_none.go:49] "None policy: Start" Mar 17 18:02:20.033417 kubelet[2734]: I0317 18:02:20.033397 2734 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:02:20.033417 kubelet[2734]: I0317 18:02:20.033419 2734 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:02:20.034047 
kubelet[2734]: W0317 18:02:20.034025 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.43.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:20.034127 kubelet[2734]: E0317 18:02:20.034115 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.180.43.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused Mar 17 18:02:20.039128 kubelet[2734]: I0317 18:02:20.039098 2734 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:02:20.039285 kubelet[2734]: I0317 18:02:20.039247 2734 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:02:20.039389 kubelet[2734]: I0317 18:02:20.039367 2734 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:02:20.045672 kubelet[2734]: E0317 18:02:20.045656 2734 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-2-2-c2b93240d2\" not found" Mar 17 18:02:20.095718 kubelet[2734]: I0317 18:02:20.095665 2734 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.096050 kubelet[2734]: E0317 18:02:20.096000 2734 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.43.77:6443/api/v1/nodes\": dial tcp 157.180.43.77:6443: connect: connection refused" node="ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.127429 kubelet[2734]: I0317 18:02:20.127372 2734 topology_manager.go:215] "Topology Admit Handler" podUID="03b64fb391808fed97c9de36328ec25a" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-2-c2b93240d2" 
Mar 17 18:02:20.129050 kubelet[2734]: I0317 18:02:20.128992 2734 topology_manager.go:215] "Topology Admit Handler" podUID="287f41dc6d4ec93844315a960ed65279" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.130887 kubelet[2734]: I0317 18:02:20.130659 2734 topology_manager.go:215] "Topology Admit Handler" podUID="be81feb501b16f752e62adfbff9952de" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.196399 kubelet[2734]: E0317 18:02:20.196259 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.43.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-2-c2b93240d2?timeout=10s\": dial tcp 157.180.43.77:6443: connect: connection refused" interval="400ms" Mar 17 18:02:20.294940 kubelet[2734]: I0317 18:02:20.294799 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/287f41dc6d4ec93844315a960ed65279-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" (UID: \"287f41dc6d4ec93844315a960ed65279\") " pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.294940 kubelet[2734]: I0317 18:02:20.294905 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/287f41dc6d4ec93844315a960ed65279-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" (UID: \"287f41dc6d4ec93844315a960ed65279\") " pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.294940 kubelet[2734]: I0317 18:02:20.294952 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" 
(UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.295245 kubelet[2734]: I0317 18:02:20.295012 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.295245 kubelet[2734]: I0317 18:02:20.295068 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.295245 kubelet[2734]: I0317 18:02:20.295102 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03b64fb391808fed97c9de36328ec25a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-2-c2b93240d2\" (UID: \"03b64fb391808fed97c9de36328ec25a\") " pod="kube-system/kube-scheduler-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.295245 kubelet[2734]: I0317 18:02:20.295128 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/287f41dc6d4ec93844315a960ed65279-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" (UID: \"287f41dc6d4ec93844315a960ed65279\") " pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.295245 kubelet[2734]: I0317 18:02:20.295153 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.295407 kubelet[2734]: I0317 18:02:20.295178 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.299457 kubelet[2734]: I0317 18:02:20.299410 2734 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.299970 kubelet[2734]: E0317 18:02:20.299895 2734 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.43.77:6443/api/v1/nodes\": dial tcp 157.180.43.77:6443: connect: connection refused" node="ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.439274 containerd[1649]: time="2025-03-17T18:02:20.438817419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-2-c2b93240d2,Uid:03b64fb391808fed97c9de36328ec25a,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:20.439274 containerd[1649]: time="2025-03-17T18:02:20.438917879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-2-c2b93240d2,Uid:287f41dc6d4ec93844315a960ed65279,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:20.443453 containerd[1649]: time="2025-03-17T18:02:20.443397184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-2-c2b93240d2,Uid:be81feb501b16f752e62adfbff9952de,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:20.598386 kubelet[2734]: E0317 18:02:20.597434 2734 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://157.180.43.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-2-c2b93240d2?timeout=10s\": dial tcp 157.180.43.77:6443: connect: connection refused" interval="800ms" Mar 17 18:02:20.703159 kubelet[2734]: I0317 18:02:20.703076 2734 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.704192 kubelet[2734]: E0317 18:02:20.703728 2734 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.43.77:6443/api/v1/nodes\": dial tcp 157.180.43.77:6443: connect: connection refused" node="ci-4152-2-2-2-c2b93240d2" Mar 17 18:02:20.894194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601888585.mount: Deactivated successfully. Mar 17 18:02:20.906454 containerd[1649]: time="2025-03-17T18:02:20.906311217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:02:20.908759 containerd[1649]: time="2025-03-17T18:02:20.908676778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Mar 17 18:02:20.911902 containerd[1649]: time="2025-03-17T18:02:20.911643886Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:02:20.913118 containerd[1649]: time="2025-03-17T18:02:20.913062948Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:02:20.915215 containerd[1649]: time="2025-03-17T18:02:20.915084128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 
18:02:20.916591 containerd[1649]: time="2025-03-17T18:02:20.916410906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:02:20.916591 containerd[1649]: time="2025-03-17T18:02:20.916518970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:02:20.923027 containerd[1649]: time="2025-03-17T18:02:20.922933372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:02:20.926366 containerd[1649]: time="2025-03-17T18:02:20.924457774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.429107ms" Mar 17 18:02:20.929382 containerd[1649]: time="2025-03-17T18:02:20.929278786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.783084ms" Mar 17 18:02:20.931351 containerd[1649]: time="2025-03-17T18:02:20.931284006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
492.311263ms" Mar 17 18:02:21.057018 containerd[1649]: time="2025-03-17T18:02:21.054646480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:21.057018 containerd[1649]: time="2025-03-17T18:02:21.056941698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:21.057454 containerd[1649]: time="2025-03-17T18:02:21.056990020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:21.057454 containerd[1649]: time="2025-03-17T18:02:21.057059942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:21.057454 containerd[1649]: time="2025-03-17T18:02:21.057210837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:21.058796 containerd[1649]: time="2025-03-17T18:02:21.058309162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:21.058796 containerd[1649]: time="2025-03-17T18:02:21.058374546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:21.058796 containerd[1649]: time="2025-03-17T18:02:21.058505453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:21.059392 containerd[1649]: time="2025-03-17T18:02:21.059141986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:21.059392 containerd[1649]: time="2025-03-17T18:02:21.059218671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:21.059392 containerd[1649]: time="2025-03-17T18:02:21.059232707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:21.060335 containerd[1649]: time="2025-03-17T18:02:21.059312849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:21.145899 containerd[1649]: time="2025-03-17T18:02:21.145784662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-2-c2b93240d2,Uid:be81feb501b16f752e62adfbff9952de,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ea5dc64b185e002eed5573a1d7f997c509da889766ff383138d41bd19b98e0d\"" Mar 17 18:02:21.156604 containerd[1649]: time="2025-03-17T18:02:21.155199895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-2-c2b93240d2,Uid:287f41dc6d4ec93844315a960ed65279,Namespace:kube-system,Attempt:0,} returns sandbox id \"c211beccfcfcc6c5e1d316091151581db755504eeb18df2cafc84c7a8c9e1b66\"" Mar 17 18:02:21.158276 containerd[1649]: time="2025-03-17T18:02:21.158222597Z" level=info msg="CreateContainer within sandbox \"6ea5dc64b185e002eed5573a1d7f997c509da889766ff383138d41bd19b98e0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:02:21.159328 containerd[1649]: time="2025-03-17T18:02:21.158764892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-2-c2b93240d2,Uid:03b64fb391808fed97c9de36328ec25a,Namespace:kube-system,Attempt:0,} returns sandbox id \"41f78a8c15b247360d112964a2df96b65c4e3df277f9e4bc4ea87844d2684e0c\"" Mar 17 18:02:21.165562 
containerd[1649]: time="2025-03-17T18:02:21.163685279Z" level=info msg="CreateContainer within sandbox \"c211beccfcfcc6c5e1d316091151581db755504eeb18df2cafc84c7a8c9e1b66\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:02:21.166233 containerd[1649]: time="2025-03-17T18:02:21.166199970Z" level=info msg="CreateContainer within sandbox \"41f78a8c15b247360d112964a2df96b65c4e3df277f9e4bc4ea87844d2684e0c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:02:21.177003 kubelet[2734]: W0317 18:02:21.176962 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.43.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.177125 kubelet[2734]: E0317 18:02:21.177115 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.180.43.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.180455 kubelet[2734]: W0317 18:02:21.180400 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.43.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.180520 kubelet[2734]: E0317 18:02:21.180460 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.180.43.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.218666 containerd[1649]: time="2025-03-17T18:02:21.218620829Z" level=info msg="CreateContainer within sandbox \"c211beccfcfcc6c5e1d316091151581db755504eeb18df2cafc84c7a8c9e1b66\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fc4c623247d4e47b6656a681903905d68770ef2ce12b16cd340df5053e007804\""
Mar 17 18:02:21.219241 containerd[1649]: time="2025-03-17T18:02:21.219210945Z" level=info msg="StartContainer for \"fc4c623247d4e47b6656a681903905d68770ef2ce12b16cd340df5053e007804\""
Mar 17 18:02:21.220443 containerd[1649]: time="2025-03-17T18:02:21.220275527Z" level=info msg="CreateContainer within sandbox \"41f78a8c15b247360d112964a2df96b65c4e3df277f9e4bc4ea87844d2684e0c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b\""
Mar 17 18:02:21.221918 containerd[1649]: time="2025-03-17T18:02:21.221791701Z" level=info msg="CreateContainer within sandbox \"6ea5dc64b185e002eed5573a1d7f997c509da889766ff383138d41bd19b98e0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c\""
Mar 17 18:02:21.223030 containerd[1649]: time="2025-03-17T18:02:21.222288330Z" level=info msg="StartContainer for \"9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b\""
Mar 17 18:02:21.228520 containerd[1649]: time="2025-03-17T18:02:21.228502653Z" level=info msg="StartContainer for \"7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c\""
Mar 17 18:02:21.299554 kubelet[2734]: W0317 18:02:21.299515 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.43.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.299554 kubelet[2734]: E0317 18:02:21.299572 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.180.43.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.302830 kubelet[2734]: W0317 18:02:21.302761 2734 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.43.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-2-c2b93240d2&limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.302830 kubelet[2734]: E0317 18:02:21.302831 2734 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.180.43.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-2-c2b93240d2&limit=500&resourceVersion=0": dial tcp 157.180.43.77:6443: connect: connection refused
Mar 17 18:02:21.303744 containerd[1649]: time="2025-03-17T18:02:21.303691005Z" level=info msg="StartContainer for \"fc4c623247d4e47b6656a681903905d68770ef2ce12b16cd340df5053e007804\" returns successfully"
Mar 17 18:02:21.341998 containerd[1649]: time="2025-03-17T18:02:21.341949381Z" level=info msg="StartContainer for \"7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c\" returns successfully"
Mar 17 18:02:21.342514 containerd[1649]: time="2025-03-17T18:02:21.342015065Z" level=info msg="StartContainer for \"9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b\" returns successfully"
Mar 17 18:02:21.399541 kubelet[2734]: E0317 18:02:21.399260 2734 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.43.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-2-c2b93240d2?timeout=10s\": dial tcp 157.180.43.77:6443: connect: connection refused" interval="1.6s"
Mar 17 18:02:21.507637 kubelet[2734]: I0317 18:02:21.507605 2734 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:21.507908 kubelet[2734]: E0317 18:02:21.507882 2734 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.43.77:6443/api/v1/nodes\": dial tcp 157.180.43.77:6443: connect: connection refused" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:23.002870 kubelet[2734]: E0317 18:02:23.002810 2734 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-2-2-c2b93240d2\" not found" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:23.111393 kubelet[2734]: I0317 18:02:23.111304 2734 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:23.126648 kubelet[2734]: I0317 18:02:23.126591 2734 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:23.135602 kubelet[2734]: E0317 18:02:23.135567 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.236854 kubelet[2734]: E0317 18:02:23.236720 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.337623 kubelet[2734]: E0317 18:02:23.337424 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.438370 kubelet[2734]: E0317 18:02:23.438283 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.538951 kubelet[2734]: E0317 18:02:23.538868 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.639658 kubelet[2734]: E0317 18:02:23.639446 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.740043 kubelet[2734]: E0317 18:02:23.739989 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.841022 kubelet[2734]: E0317 18:02:23.840956 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:23.941753 kubelet[2734]: E0317 18:02:23.941643 2734 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-2-c2b93240d2\" not found"
Mar 17 18:02:24.973100 kubelet[2734]: I0317 18:02:24.973004 2734 apiserver.go:52] "Watching apiserver"
Mar 17 18:02:24.994966 kubelet[2734]: I0317 18:02:24.994905 2734 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:02:25.091263 systemd[1]: Reloading requested from client PID 3007 ('systemctl') (unit session-7.scope)...
Mar 17 18:02:25.091298 systemd[1]: Reloading...
Mar 17 18:02:25.177362 zram_generator::config[3047]: No configuration found.
Mar 17 18:02:25.280697 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:02:25.346764 systemd[1]: Reloading finished in 254 ms.
Mar 17 18:02:25.379800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:25.380307 kubelet[2734]: E0317 18:02:25.379729 2734 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152-2-2-2-c2b93240d2.182da91556a2ecff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-2-c2b93240d2,UID:ci-4152-2-2-2-c2b93240d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-2-c2b93240d2,},FirstTimestamp:2025-03-17 18:02:19.975175423 +0000 UTC m=+0.331470169,LastTimestamp:2025-03-17 18:02:19.975175423 +0000 UTC m=+0.331470169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-2-c2b93240d2,}"
Mar 17 18:02:25.398540 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:02:25.398924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:25.410903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:02:25.545451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 18:02:25.549282 (kubelet)[3108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 18:02:25.590074 kubelet[3108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:02:25.590074 kubelet[3108]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:02:25.590074 kubelet[3108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:02:25.590644 kubelet[3108]: I0317 18:02:25.590100 3108 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:02:25.594124 kubelet[3108]: I0317 18:02:25.594093 3108 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:02:25.594124 kubelet[3108]: I0317 18:02:25.594113 3108 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:02:25.594256 kubelet[3108]: I0317 18:02:25.594233 3108 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:02:25.595587 kubelet[3108]: I0317 18:02:25.595555 3108 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:02:25.596993 kubelet[3108]: I0317 18:02:25.596972 3108 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:02:25.611464 kubelet[3108]: I0317 18:02:25.610580 3108 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:02:25.612707 kubelet[3108]: I0317 18:02:25.612660 3108 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:02:25.612883 kubelet[3108]: I0317 18:02:25.612686 3108 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-2-c2b93240d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:02:25.612983 kubelet[3108]: I0317 18:02:25.612885 3108 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:02:25.612983 kubelet[3108]: I0317 18:02:25.612896 3108 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:02:25.612983 kubelet[3108]: I0317 18:02:25.612941 3108 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:02:25.613172 kubelet[3108]: I0317 18:02:25.613152 3108 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:02:25.613172 kubelet[3108]: I0317 18:02:25.613166 3108 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:02:25.613219 kubelet[3108]: I0317 18:02:25.613185 3108 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:02:25.613219 kubelet[3108]: I0317 18:02:25.613204 3108 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:02:25.616802 kubelet[3108]: I0317 18:02:25.615575 3108 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 18:02:25.616802 kubelet[3108]: I0317 18:02:25.615725 3108 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:02:25.616802 kubelet[3108]: I0317 18:02:25.616094 3108 server.go:1264] "Started kubelet"
Mar 17 18:02:25.618310 kubelet[3108]: I0317 18:02:25.618275 3108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:02:25.627158 kubelet[3108]: I0317 18:02:25.625923 3108 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:02:25.627158 kubelet[3108]: I0317 18:02:25.626886 3108 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:02:25.627953 kubelet[3108]: I0317 18:02:25.627681 3108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:02:25.627953 kubelet[3108]: I0317 18:02:25.627950 3108 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:02:25.631337 kubelet[3108]: I0317 18:02:25.629873 3108 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:02:25.631650 kubelet[3108]: I0317 18:02:25.631625 3108 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:02:25.633250 kubelet[3108]: I0317 18:02:25.631739 3108 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:02:25.633651 kubelet[3108]: I0317 18:02:25.633629 3108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:02:25.635312 kubelet[3108]: I0317 18:02:25.635295 3108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:02:25.635409 kubelet[3108]: I0317 18:02:25.635400 3108 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:02:25.635485 kubelet[3108]: I0317 18:02:25.635473 3108 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:02:25.635629 kubelet[3108]: E0317 18:02:25.635612 3108 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:02:25.638974 kubelet[3108]: I0317 18:02:25.638949 3108 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:02:25.639040 kubelet[3108]: I0317 18:02:25.639016 3108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:02:25.641200 kubelet[3108]: I0317 18:02:25.641173 3108 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:02:25.699327 kubelet[3108]: I0317 18:02:25.699289 3108 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:02:25.699327 kubelet[3108]: I0317 18:02:25.699305 3108 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:02:25.699469 kubelet[3108]: I0317 18:02:25.699348 3108 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:02:25.699502 kubelet[3108]: I0317 18:02:25.699475 3108 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:02:25.699502 kubelet[3108]: I0317 18:02:25.699485 3108 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:02:25.699502 kubelet[3108]: I0317 18:02:25.699501 3108 policy_none.go:49] "None policy: Start"
Mar 17 18:02:25.700037 kubelet[3108]: I0317 18:02:25.700010 3108 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:02:25.700037 kubelet[3108]: I0317 18:02:25.700033 3108 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:02:25.700153 kubelet[3108]: I0317 18:02:25.700129 3108 state_mem.go:75] "Updated machine memory state"
Mar 17 18:02:25.701535 kubelet[3108]: I0317 18:02:25.701508 3108 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:02:25.704172 kubelet[3108]: I0317 18:02:25.702288 3108 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:02:25.704172 kubelet[3108]: I0317 18:02:25.702420 3108 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:02:25.735286 kubelet[3108]: I0317 18:02:25.735254 3108 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.737877 kubelet[3108]: I0317 18:02:25.737673 3108 topology_manager.go:215] "Topology Admit Handler" podUID="be81feb501b16f752e62adfbff9952de" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.737877 kubelet[3108]: I0317 18:02:25.737743 3108 topology_manager.go:215] "Topology Admit Handler" podUID="03b64fb391808fed97c9de36328ec25a" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.737877 kubelet[3108]: I0317 18:02:25.737784 3108 topology_manager.go:215] "Topology Admit Handler" podUID="287f41dc6d4ec93844315a960ed65279" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.746100 kubelet[3108]: E0317 18:02:25.746054 3108 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.746741 kubelet[3108]: I0317 18:02:25.746674 3108 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.746741 kubelet[3108]: I0317 18:02:25.746741 3108 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833051 kubelet[3108]: I0317 18:02:25.832897 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833051 kubelet[3108]: I0317 18:02:25.832963 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833051 kubelet[3108]: I0317 18:02:25.833005 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03b64fb391808fed97c9de36328ec25a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-2-c2b93240d2\" (UID: \"03b64fb391808fed97c9de36328ec25a\") " pod="kube-system/kube-scheduler-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833051 kubelet[3108]: I0317 18:02:25.833046 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/287f41dc6d4ec93844315a960ed65279-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" (UID: \"287f41dc6d4ec93844315a960ed65279\") " pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833291 kubelet[3108]: I0317 18:02:25.833083 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833291 kubelet[3108]: I0317 18:02:25.833115 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833291 kubelet[3108]: I0317 18:02:25.833148 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be81feb501b16f752e62adfbff9952de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-2-c2b93240d2\" (UID: \"be81feb501b16f752e62adfbff9952de\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833291 kubelet[3108]: I0317 18:02:25.833183 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/287f41dc6d4ec93844315a960ed65279-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" (UID: \"287f41dc6d4ec93844315a960ed65279\") " pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:25.833291 kubelet[3108]: I0317 18:02:25.833215 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/287f41dc6d4ec93844315a960ed65279-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" (UID: \"287f41dc6d4ec93844315a960ed65279\") " pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:26.109022 sudo[3139]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:02:26.109761 sudo[3139]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 17 18:02:26.613573 kubelet[3108]: I0317 18:02:26.613528 3108 apiserver.go:52] "Watching apiserver"
Mar 17 18:02:26.634018 kubelet[3108]: I0317 18:02:26.632593 3108 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:02:26.647776 sudo[3139]: pam_unix(sudo:session): session closed for user root
Mar 17 18:02:26.679507 kubelet[3108]: E0317 18:02:26.679476 3108 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-2-2-c2b93240d2\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2"
Mar 17 18:02:26.691160 kubelet[3108]: I0317 18:02:26.691116 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-2-2-c2b93240d2" podStartSLOduration=2.691103106 podStartE2EDuration="2.691103106s" podCreationTimestamp="2025-03-17 18:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:26.690506279 +0000 UTC m=+1.134336154" watchObservedRunningTime="2025-03-17 18:02:26.691103106 +0000 UTC m=+1.134933002"
Mar 17 18:02:26.702524 kubelet[3108]: I0317 18:02:26.702481 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" podStartSLOduration=1.70247002 podStartE2EDuration="1.70247002s" podCreationTimestamp="2025-03-17 18:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:26.702295862 +0000 UTC m=+1.146125726" watchObservedRunningTime="2025-03-17 18:02:26.70247002 +0000 UTC m=+1.146299885"
Mar 17 18:02:26.711605 kubelet[3108]: I0317 18:02:26.711479 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-2-2-c2b93240d2" podStartSLOduration=1.711463954 podStartE2EDuration="1.711463954s" podCreationTimestamp="2025-03-17 18:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:26.710923444 +0000 UTC m=+1.154753319" watchObservedRunningTime="2025-03-17 18:02:26.711463954 +0000 UTC m=+1.155293819"
Mar 17 18:02:28.389089 sudo[2134]: pam_unix(sudo:session): session closed for user root
Mar 17 18:02:28.546152 sshd[2133]: Connection closed by 139.178.68.195 port 58668
Mar 17 18:02:28.548058 sshd-session[2130]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:28.553406 systemd[1]: sshd@6-157.180.43.77:22-139.178.68.195:58668.service: Deactivated successfully.
Mar 17 18:02:28.563031 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:02:28.565455 systemd-logind[1624]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:02:28.568363 systemd-logind[1624]: Removed session 7.
Mar 17 18:02:40.834231 kubelet[3108]: I0317 18:02:40.834190 3108 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:02:40.834706 kubelet[3108]: I0317 18:02:40.834672 3108 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:02:40.834754 containerd[1649]: time="2025-03-17T18:02:40.834499032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:02:41.788812 kubelet[3108]: I0317 18:02:41.788750 3108 topology_manager.go:215] "Topology Admit Handler" podUID="ae07cba6-2860-4935-a314-9e5437319ee8" podNamespace="kube-system" podName="kube-proxy-kvk8x"
Mar 17 18:02:41.806419 kubelet[3108]: I0317 18:02:41.806227 3108 topology_manager.go:215] "Topology Admit Handler" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" podNamespace="kube-system" podName="cilium-hhnbf"
Mar 17 18:02:41.865650 kubelet[3108]: I0317 18:02:41.863629 3108 topology_manager.go:215] "Topology Admit Handler" podUID="0c631824-e69f-4681-b0f5-d67417577ed5" podNamespace="kube-system" podName="cilium-operator-599987898-d6klr"
Mar 17 18:02:41.934436 kubelet[3108]: I0317 18:02:41.934400 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beb5c595-6ad8-4c10-bd42-4ea1f075736d-clustermesh-secrets\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.934611 kubelet[3108]: I0317 18:02:41.934590 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-config-path\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.934741 kubelet[3108]: I0317 18:02:41.934709 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tqk\" (UniqueName: \"kubernetes.io/projected/ae07cba6-2860-4935-a314-9e5437319ee8-kube-api-access-r4tqk\") pod \"kube-proxy-kvk8x\" (UID: \"ae07cba6-2860-4935-a314-9e5437319ee8\") " pod="kube-system/kube-proxy-kvk8x"
Mar 17 18:02:41.934841 kubelet[3108]: I0317 18:02:41.934824 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-lib-modules\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.934935 kubelet[3108]: I0317 18:02:41.934919 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cni-path\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935044 kubelet[3108]: I0317 18:02:41.935025 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae07cba6-2860-4935-a314-9e5437319ee8-kube-proxy\") pod \"kube-proxy-kvk8x\" (UID: \"ae07cba6-2860-4935-a314-9e5437319ee8\") " pod="kube-system/kube-proxy-kvk8x"
Mar 17 18:02:41.935184 kubelet[3108]: I0317 18:02:41.935121 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-cgroup\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935184 kubelet[3108]: I0317 18:02:41.935151 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-net\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935184 kubelet[3108]: I0317 18:02:41.935176 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-run\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935546 kubelet[3108]: I0317 18:02:41.935203 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-etc-cni-netd\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935546 kubelet[3108]: I0317 18:02:41.935227 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hubble-tls\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935546 kubelet[3108]: I0317 18:02:41.935252 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae07cba6-2860-4935-a314-9e5437319ee8-lib-modules\") pod \"kube-proxy-kvk8x\" (UID: \"ae07cba6-2860-4935-a314-9e5437319ee8\") " pod="kube-system/kube-proxy-kvk8x"
Mar 17 18:02:41.935546 kubelet[3108]: I0317 18:02:41.935274 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-bpf-maps\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935546 kubelet[3108]: I0317 18:02:41.935298 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hostproc\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935546 kubelet[3108]: I0317 18:02:41.935334 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-xtables-lock\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935703 kubelet[3108]: I0317 18:02:41.935358 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-kernel\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935703 kubelet[3108]: I0317 18:02:41.935407 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld5fq\" (UniqueName: \"kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-kube-api-access-ld5fq\") pod \"cilium-hhnbf\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") " pod="kube-system/cilium-hhnbf"
Mar 17 18:02:41.935703 kubelet[3108]: I0317 18:02:41.935433 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae07cba6-2860-4935-a314-9e5437319ee8-xtables-lock\") pod \"kube-proxy-kvk8x\" (UID: \"ae07cba6-2860-4935-a314-9e5437319ee8\") " pod="kube-system/kube-proxy-kvk8x"
Mar 17 18:02:42.037114 kubelet[3108]: I0317 18:02:42.035690 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c631824-e69f-4681-b0f5-d67417577ed5-cilium-config-path\") pod \"cilium-operator-599987898-d6klr\" (UID: \"0c631824-e69f-4681-b0f5-d67417577ed5\") " pod="kube-system/cilium-operator-599987898-d6klr"
Mar 17 18:02:42.037114 kubelet[3108]: I0317 18:02:42.035899 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xct\" (UniqueName: \"kubernetes.io/projected/0c631824-e69f-4681-b0f5-d67417577ed5-kube-api-access-49xct\") pod \"cilium-operator-599987898-d6klr\" (UID: \"0c631824-e69f-4681-b0f5-d67417577ed5\") " pod="kube-system/cilium-operator-599987898-d6klr"
Mar 17 18:02:42.108258 containerd[1649]: time="2025-03-17T18:02:42.108104378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvk8x,Uid:ae07cba6-2860-4935-a314-9e5437319ee8,Namespace:kube-system,Attempt:0,}"
Mar 17 18:02:42.116827 containerd[1649]: time="2025-03-17T18:02:42.116747954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhnbf,Uid:beb5c595-6ad8-4c10-bd42-4ea1f075736d,Namespace:kube-system,Attempt:0,}"
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.173384249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.173537347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.173559920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.174742058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.178022865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.178058110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.178074491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:02:42.180673 containerd[1649]: time="2025-03-17T18:02:42.179101628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:42.233094 containerd[1649]: time="2025-03-17T18:02:42.232645021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhnbf,Uid:beb5c595-6ad8-4c10-bd42-4ea1f075736d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\"" Mar 17 18:02:42.235346 containerd[1649]: time="2025-03-17T18:02:42.234841303Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:02:42.236027 containerd[1649]: time="2025-03-17T18:02:42.235991551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvk8x,Uid:ae07cba6-2860-4935-a314-9e5437319ee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2708c4fae77de2635150e6158a0cc776cf3d4e0c507b0f7adcb11e8c7f0e48f\"" Mar 17 18:02:42.239014 containerd[1649]: time="2025-03-17T18:02:42.238897109Z" level=info msg="CreateContainer within sandbox \"e2708c4fae77de2635150e6158a0cc776cf3d4e0c507b0f7adcb11e8c7f0e48f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:02:42.250841 containerd[1649]: time="2025-03-17T18:02:42.250820047Z" level=info msg="CreateContainer within sandbox \"e2708c4fae77de2635150e6158a0cc776cf3d4e0c507b0f7adcb11e8c7f0e48f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f80d86a0485e27e55777c1273e0efcd0e4a673b399704354090d8d7abf86a847\"" Mar 17 18:02:42.252356 containerd[1649]: time="2025-03-17T18:02:42.251368771Z" level=info msg="StartContainer for \"f80d86a0485e27e55777c1273e0efcd0e4a673b399704354090d8d7abf86a847\"" Mar 17 18:02:42.304236 containerd[1649]: time="2025-03-17T18:02:42.304187470Z" level=info msg="StartContainer for \"f80d86a0485e27e55777c1273e0efcd0e4a673b399704354090d8d7abf86a847\" returns successfully" Mar 17 18:02:42.475912 containerd[1649]: time="2025-03-17T18:02:42.475093544Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-d6klr,Uid:0c631824-e69f-4681-b0f5-d67417577ed5,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:42.524679 containerd[1649]: time="2025-03-17T18:02:42.524287028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:42.526108 containerd[1649]: time="2025-03-17T18:02:42.525167918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:42.526108 containerd[1649]: time="2025-03-17T18:02:42.525226479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:42.526108 containerd[1649]: time="2025-03-17T18:02:42.525338240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:42.581988 containerd[1649]: time="2025-03-17T18:02:42.581906065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-d6klr,Uid:0c631824-e69f-4681-b0f5-d67417577ed5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\"" Mar 17 18:02:45.665512 kubelet[3108]: I0317 18:02:45.665337 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kvk8x" podStartSLOduration=4.665306453 podStartE2EDuration="4.665306453s" podCreationTimestamp="2025-03-17 18:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:42.713680807 +0000 UTC m=+17.157510691" watchObservedRunningTime="2025-03-17 18:02:45.665306453 +0000 UTC m=+20.109136318" Mar 17 18:02:46.212622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654803398.mount: Deactivated successfully. 
Mar 17 18:02:47.685837 containerd[1649]: time="2025-03-17T18:02:47.685772977Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:02:47.687238 containerd[1649]: time="2025-03-17T18:02:47.687160692Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 18:02:47.688101 containerd[1649]: time="2025-03-17T18:02:47.687430801Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:02:47.689019 containerd[1649]: time="2025-03-17T18:02:47.688922733Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.45334366s" Mar 17 18:02:47.689019 containerd[1649]: time="2025-03-17T18:02:47.688949123Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:02:47.690143 containerd[1649]: time="2025-03-17T18:02:47.689967001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:02:47.691519 containerd[1649]: time="2025-03-17T18:02:47.691427143Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:02:47.748146 containerd[1649]: time="2025-03-17T18:02:47.748094107Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\"" Mar 17 18:02:47.748803 containerd[1649]: time="2025-03-17T18:02:47.748766354Z" level=info msg="StartContainer for \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\"" Mar 17 18:02:47.928356 containerd[1649]: time="2025-03-17T18:02:47.927192265Z" level=info msg="StartContainer for \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\" returns successfully" Mar 17 18:02:47.976927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9-rootfs.mount: Deactivated successfully. Mar 17 18:02:48.009838 containerd[1649]: time="2025-03-17T18:02:48.005120096Z" level=info msg="shim disconnected" id=eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9 namespace=k8s.io Mar 17 18:02:48.009838 containerd[1649]: time="2025-03-17T18:02:48.009829470Z" level=warning msg="cleaning up after shim disconnected" id=eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9 namespace=k8s.io Mar 17 18:02:48.009838 containerd[1649]: time="2025-03-17T18:02:48.009844129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:02:48.724160 containerd[1649]: time="2025-03-17T18:02:48.723885557Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:02:48.750962 containerd[1649]: time="2025-03-17T18:02:48.750890589Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\"" Mar 17 18:02:48.757005 containerd[1649]: time="2025-03-17T18:02:48.754505800Z" level=info msg="StartContainer for \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\"" Mar 17 18:02:48.819960 containerd[1649]: time="2025-03-17T18:02:48.819932656Z" level=info msg="StartContainer for \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\" returns successfully" Mar 17 18:02:48.832346 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:02:48.833231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:02:48.833307 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:02:48.842472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:02:48.857568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91-rootfs.mount: Deactivated successfully. Mar 17 18:02:48.862711 containerd[1649]: time="2025-03-17T18:02:48.862641950Z" level=info msg="shim disconnected" id=56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91 namespace=k8s.io Mar 17 18:02:48.862711 containerd[1649]: time="2025-03-17T18:02:48.862698817Z" level=warning msg="cleaning up after shim disconnected" id=56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91 namespace=k8s.io Mar 17 18:02:48.862711 containerd[1649]: time="2025-03-17T18:02:48.862708626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:02:48.867887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 17 18:02:49.732498 containerd[1649]: time="2025-03-17T18:02:49.732451892Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:02:49.769915 containerd[1649]: time="2025-03-17T18:02:49.769764682Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\"" Mar 17 18:02:49.771347 containerd[1649]: time="2025-03-17T18:02:49.771192824Z" level=info msg="StartContainer for \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\"" Mar 17 18:02:49.864605 containerd[1649]: time="2025-03-17T18:02:49.864194170Z" level=info msg="StartContainer for \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\" returns successfully" Mar 17 18:02:49.921148 containerd[1649]: time="2025-03-17T18:02:49.921073531Z" level=info msg="shim disconnected" id=af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb namespace=k8s.io Mar 17 18:02:49.921148 containerd[1649]: time="2025-03-17T18:02:49.921136611Z" level=warning msg="cleaning up after shim disconnected" id=af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb namespace=k8s.io Mar 17 18:02:49.921148 containerd[1649]: time="2025-03-17T18:02:49.921145807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:02:50.141142 containerd[1649]: time="2025-03-17T18:02:50.140771092Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:02:50.141592 containerd[1649]: time="2025-03-17T18:02:50.141554558Z" level=info msg="stop pulling image 
quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 18:02:50.142513 containerd[1649]: time="2025-03-17T18:02:50.142485151Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:02:50.144175 containerd[1649]: time="2025-03-17T18:02:50.143732883Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.453740574s" Mar 17 18:02:50.144175 containerd[1649]: time="2025-03-17T18:02:50.143765965Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 18:02:50.146177 containerd[1649]: time="2025-03-17T18:02:50.146145138Z" level=info msg="CreateContainer within sandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:02:50.169597 containerd[1649]: time="2025-03-17T18:02:50.169432777Z" level=info msg="CreateContainer within sandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\"" Mar 17 18:02:50.170836 containerd[1649]: time="2025-03-17T18:02:50.170349054Z" level=info msg="StartContainer for \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\"" Mar 
17 18:02:50.232491 containerd[1649]: time="2025-03-17T18:02:50.232445518Z" level=info msg="StartContainer for \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\" returns successfully" Mar 17 18:02:50.732176 containerd[1649]: time="2025-03-17T18:02:50.732132555Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:02:50.749257 containerd[1649]: time="2025-03-17T18:02:50.749220884Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\"" Mar 17 18:02:50.753357 containerd[1649]: time="2025-03-17T18:02:50.749637679Z" level=info msg="StartContainer for \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\"" Mar 17 18:02:50.750296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb-rootfs.mount: Deactivated successfully. 
Mar 17 18:02:50.788651 kubelet[3108]: I0317 18:02:50.788100 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-d6klr" podStartSLOduration=2.227462447 podStartE2EDuration="9.788086035s" podCreationTimestamp="2025-03-17 18:02:41 +0000 UTC" firstStartedPulling="2025-03-17 18:02:42.583630686 +0000 UTC m=+17.027460552" lastFinishedPulling="2025-03-17 18:02:50.144254275 +0000 UTC m=+24.588084140" observedRunningTime="2025-03-17 18:02:50.748644408 +0000 UTC m=+25.192474283" watchObservedRunningTime="2025-03-17 18:02:50.788086035 +0000 UTC m=+25.231915900" Mar 17 18:02:50.864485 containerd[1649]: time="2025-03-17T18:02:50.864060549Z" level=info msg="StartContainer for \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\" returns successfully" Mar 17 18:02:50.893456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8-rootfs.mount: Deactivated successfully. 
Mar 17 18:02:50.921686 containerd[1649]: time="2025-03-17T18:02:50.921421290Z" level=info msg="shim disconnected" id=ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8 namespace=k8s.io Mar 17 18:02:50.921686 containerd[1649]: time="2025-03-17T18:02:50.921470843Z" level=warning msg="cleaning up after shim disconnected" id=ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8 namespace=k8s.io Mar 17 18:02:50.921686 containerd[1649]: time="2025-03-17T18:02:50.921478807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:02:51.742212 containerd[1649]: time="2025-03-17T18:02:51.742063247Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:02:51.759522 containerd[1649]: time="2025-03-17T18:02:51.758590757Z" level=info msg="CreateContainer within sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\"" Mar 17 18:02:51.762748 containerd[1649]: time="2025-03-17T18:02:51.761605738Z" level=info msg="StartContainer for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\"" Mar 17 18:02:51.795119 systemd[1]: run-containerd-runc-k8s.io-5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965-runc.uam4Yl.mount: Deactivated successfully. 
Mar 17 18:02:51.846398 containerd[1649]: time="2025-03-17T18:02:51.846367630Z" level=info msg="StartContainer for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" returns successfully" Mar 17 18:02:52.061033 kubelet[3108]: I0317 18:02:52.060417 3108 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:02:52.095078 kubelet[3108]: I0317 18:02:52.091519 3108 topology_manager.go:215] "Topology Admit Handler" podUID="4d6b2097-31b0-4bcb-83a7-fe54ad391e3b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bm4s2" Mar 17 18:02:52.096575 kubelet[3108]: I0317 18:02:52.096553 3108 topology_manager.go:215] "Topology Admit Handler" podUID="c9c0468e-c4bd-4330-a53c-243c492a8144" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8nsms" Mar 17 18:02:52.131452 kubelet[3108]: I0317 18:02:52.131425 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6b2097-31b0-4bcb-83a7-fe54ad391e3b-config-volume\") pod \"coredns-7db6d8ff4d-bm4s2\" (UID: \"4d6b2097-31b0-4bcb-83a7-fe54ad391e3b\") " pod="kube-system/coredns-7db6d8ff4d-bm4s2" Mar 17 18:02:52.131846 kubelet[3108]: I0317 18:02:52.131830 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-567d2\" (UniqueName: \"kubernetes.io/projected/4d6b2097-31b0-4bcb-83a7-fe54ad391e3b-kube-api-access-567d2\") pod \"coredns-7db6d8ff4d-bm4s2\" (UID: \"4d6b2097-31b0-4bcb-83a7-fe54ad391e3b\") " pod="kube-system/coredns-7db6d8ff4d-bm4s2" Mar 17 18:02:52.132123 kubelet[3108]: I0317 18:02:52.132108 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246z9\" (UniqueName: \"kubernetes.io/projected/c9c0468e-c4bd-4330-a53c-243c492a8144-kube-api-access-246z9\") pod \"coredns-7db6d8ff4d-8nsms\" (UID: \"c9c0468e-c4bd-4330-a53c-243c492a8144\") " 
pod="kube-system/coredns-7db6d8ff4d-8nsms" Mar 17 18:02:52.132920 kubelet[3108]: I0317 18:02:52.132850 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9c0468e-c4bd-4330-a53c-243c492a8144-config-volume\") pod \"coredns-7db6d8ff4d-8nsms\" (UID: \"c9c0468e-c4bd-4330-a53c-243c492a8144\") " pod="kube-system/coredns-7db6d8ff4d-8nsms" Mar 17 18:02:52.416919 containerd[1649]: time="2025-03-17T18:02:52.416245723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bm4s2,Uid:4d6b2097-31b0-4bcb-83a7-fe54ad391e3b,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:52.418730 containerd[1649]: time="2025-03-17T18:02:52.418507586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8nsms,Uid:c9c0468e-c4bd-4330-a53c-243c492a8144,Namespace:kube-system,Attempt:0,}" Mar 17 18:02:54.138117 systemd-networkd[1251]: cilium_host: Link UP Mar 17 18:02:54.143161 systemd-networkd[1251]: cilium_net: Link UP Mar 17 18:02:54.143171 systemd-networkd[1251]: cilium_net: Gained carrier Mar 17 18:02:54.143573 systemd-networkd[1251]: cilium_host: Gained carrier Mar 17 18:02:54.147524 systemd-networkd[1251]: cilium_host: Gained IPv6LL Mar 17 18:02:54.258393 systemd-networkd[1251]: cilium_vxlan: Link UP Mar 17 18:02:54.258400 systemd-networkd[1251]: cilium_vxlan: Gained carrier Mar 17 18:02:54.645354 kernel: NET: Registered PF_ALG protocol family Mar 17 18:02:54.774621 systemd-networkd[1251]: cilium_net: Gained IPv6LL Mar 17 18:02:55.341808 systemd-networkd[1251]: lxc_health: Link UP Mar 17 18:02:55.351433 systemd-networkd[1251]: lxc_health: Gained carrier Mar 17 18:02:55.413801 systemd-networkd[1251]: cilium_vxlan: Gained IPv6LL Mar 17 18:02:55.532414 systemd-networkd[1251]: lxcf1f44fe2f7c0: Link UP Mar 17 18:02:55.545385 kernel: eth0: renamed from tmpfbe31 Mar 17 18:02:55.549614 systemd-networkd[1251]: lxcf1f44fe2f7c0: Gained carrier Mar 17 
18:02:55.554431 systemd-networkd[1251]: lxc83c04e87872a: Link UP Mar 17 18:02:55.568427 kernel: eth0: renamed from tmpbb335 Mar 17 18:02:55.573787 systemd-networkd[1251]: lxc83c04e87872a: Gained carrier Mar 17 18:02:56.135019 kubelet[3108]: I0317 18:02:56.134954 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hhnbf" podStartSLOduration=9.679272936 podStartE2EDuration="15.134939634s" podCreationTimestamp="2025-03-17 18:02:41 +0000 UTC" firstStartedPulling="2025-03-17 18:02:42.234143377 +0000 UTC m=+16.677973242" lastFinishedPulling="2025-03-17 18:02:47.689810075 +0000 UTC m=+22.133639940" observedRunningTime="2025-03-17 18:02:52.76110878 +0000 UTC m=+27.204938675" watchObservedRunningTime="2025-03-17 18:02:56.134939634 +0000 UTC m=+30.578769499" Mar 17 18:02:56.629640 systemd-networkd[1251]: lxc83c04e87872a: Gained IPv6LL Mar 17 18:02:56.694643 systemd-networkd[1251]: lxcf1f44fe2f7c0: Gained IPv6LL Mar 17 18:02:57.334178 systemd-networkd[1251]: lxc_health: Gained IPv6LL Mar 17 18:02:58.816650 containerd[1649]: time="2025-03-17T18:02:58.813656528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:58.816650 containerd[1649]: time="2025-03-17T18:02:58.813716350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:58.816650 containerd[1649]: time="2025-03-17T18:02:58.813729635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:58.816650 containerd[1649]: time="2025-03-17T18:02:58.814280392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:58.828398 containerd[1649]: time="2025-03-17T18:02:58.827744401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:02:58.828398 containerd[1649]: time="2025-03-17T18:02:58.827783084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:02:58.828398 containerd[1649]: time="2025-03-17T18:02:58.827800427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:58.828398 containerd[1649]: time="2025-03-17T18:02:58.827900135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:02:58.948745 containerd[1649]: time="2025-03-17T18:02:58.948698910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bm4s2,Uid:4d6b2097-31b0-4bcb-83a7-fe54ad391e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb335f30a2fc2fc55c0d2b86781de6cf0db14b710a19b569a0adf3b78a1f09b0\"" Mar 17 18:02:58.954106 containerd[1649]: time="2025-03-17T18:02:58.953883415Z" level=info msg="CreateContainer within sandbox \"bb335f30a2fc2fc55c0d2b86781de6cf0db14b710a19b569a0adf3b78a1f09b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:02:58.962361 containerd[1649]: time="2025-03-17T18:02:58.962016312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8nsms,Uid:c9c0468e-c4bd-4330-a53c-243c492a8144,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbe31524c9ed7507e7b78579e18b0031f6cf0d6818e3d5a0e7966505ab8c2478\"" Mar 17 18:02:58.965532 containerd[1649]: time="2025-03-17T18:02:58.965464928Z" level=info msg="CreateContainer within sandbox \"fbe31524c9ed7507e7b78579e18b0031f6cf0d6818e3d5a0e7966505ab8c2478\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:02:58.978925 containerd[1649]: time="2025-03-17T18:02:58.978892879Z" level=info msg="CreateContainer within sandbox \"bb335f30a2fc2fc55c0d2b86781de6cf0db14b710a19b569a0adf3b78a1f09b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37153455d60551d5a8df8ecc23cd645dbb54eff5059d640dff06bba8def85c09\"" Mar 17 18:02:58.979262 containerd[1649]: time="2025-03-17T18:02:58.979230795Z" level=info msg="StartContainer for \"37153455d60551d5a8df8ecc23cd645dbb54eff5059d640dff06bba8def85c09\"" Mar 17 18:02:58.984812 containerd[1649]: time="2025-03-17T18:02:58.984729080Z" level=info msg="CreateContainer within sandbox \"fbe31524c9ed7507e7b78579e18b0031f6cf0d6818e3d5a0e7966505ab8c2478\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc02273fda03f661468cb467409f63d911014fba6f22383fbef2918c568dd199\"" Mar 17 18:02:58.986248 containerd[1649]: time="2025-03-17T18:02:58.985763648Z" level=info msg="StartContainer for \"bc02273fda03f661468cb467409f63d911014fba6f22383fbef2918c568dd199\"" Mar 17 18:02:59.051951 containerd[1649]: time="2025-03-17T18:02:59.051912771Z" level=info msg="StartContainer for \"bc02273fda03f661468cb467409f63d911014fba6f22383fbef2918c568dd199\" returns successfully" Mar 17 18:02:59.052064 containerd[1649]: time="2025-03-17T18:02:59.051973205Z" level=info msg="StartContainer for \"37153455d60551d5a8df8ecc23cd645dbb54eff5059d640dff06bba8def85c09\" returns successfully" Mar 17 18:02:59.787222 kubelet[3108]: I0317 18:02:59.787148 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bm4s2" podStartSLOduration=18.787131487 podStartE2EDuration="18.787131487s" podCreationTimestamp="2025-03-17 18:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:59.785853089 +0000 UTC m=+34.229682964" 
watchObservedRunningTime="2025-03-17 18:02:59.787131487 +0000 UTC m=+34.230961362" Mar 17 18:02:59.831382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562027613.mount: Deactivated successfully. Mar 17 18:03:03.077313 kubelet[3108]: I0317 18:03:03.077060 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:03:03.098229 kubelet[3108]: I0317 18:03:03.098128 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8nsms" podStartSLOduration=22.098099674 podStartE2EDuration="22.098099674s" podCreationTimestamp="2025-03-17 18:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:02:59.826075582 +0000 UTC m=+34.269905447" watchObservedRunningTime="2025-03-17 18:03:03.098099674 +0000 UTC m=+37.541929569" Mar 17 18:05:26.736598 systemd[1]: Started sshd@7-157.180.43.77:22-209.38.22.145:57116.service - OpenSSH per-connection server daemon (209.38.22.145:57116). Mar 17 18:05:28.042077 sshd[4490]: Invalid user from 209.38.22.145 port 57116 Mar 17 18:05:34.729167 sshd[4490]: Connection closed by invalid user 209.38.22.145 port 57116 [preauth] Mar 17 18:05:34.734908 systemd[1]: sshd@7-157.180.43.77:22-209.38.22.145:57116.service: Deactivated successfully. Mar 17 18:07:16.966550 systemd[1]: Started sshd@8-157.180.43.77:22-139.178.68.195:46250.service - OpenSSH per-connection server daemon (139.178.68.195:46250). Mar 17 18:07:17.958131 sshd[4506]: Accepted publickey for core from 139.178.68.195 port 46250 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 18:07:17.960371 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:07:17.969443 systemd-logind[1624]: New session 8 of user core. Mar 17 18:07:17.974982 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 18:07:19.049259 sshd[4509]: Connection closed by 139.178.68.195 port 46250 Mar 17 18:07:19.050544 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Mar 17 18:07:19.055456 systemd[1]: sshd@8-157.180.43.77:22-139.178.68.195:46250.service: Deactivated successfully. Mar 17 18:07:19.055502 systemd-logind[1624]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:07:19.059050 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:07:19.059967 systemd-logind[1624]: Removed session 8. Mar 17 18:07:24.223235 systemd[1]: Started sshd@9-157.180.43.77:22-139.178.68.195:46260.service - OpenSSH per-connection server daemon (139.178.68.195:46260). Mar 17 18:07:25.257001 sshd[4522]: Accepted publickey for core from 139.178.68.195 port 46260 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 18:07:25.259755 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:07:25.269629 systemd-logind[1624]: New session 9 of user core. Mar 17 18:07:25.277119 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 18:07:26.033791 sshd[4525]: Connection closed by 139.178.68.195 port 46260 Mar 17 18:07:26.035528 sshd-session[4522]: pam_unix(sshd:session): session closed for user core Mar 17 18:07:26.046034 systemd-logind[1624]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:07:26.046618 systemd[1]: sshd@9-157.180.43.77:22-139.178.68.195:46260.service: Deactivated successfully. Mar 17 18:07:26.054535 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:07:26.057386 systemd-logind[1624]: Removed session 9. Mar 17 18:07:31.196473 systemd[1]: Started sshd@10-157.180.43.77:22-139.178.68.195:44732.service - OpenSSH per-connection server daemon (139.178.68.195:44732). 
Mar 17 18:07:32.197620 sshd[4541]: Accepted publickey for core from 139.178.68.195 port 44732 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:32.199686 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:32.207261 systemd-logind[1624]: New session 10 of user core.
Mar 17 18:07:32.214903 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 18:07:32.953124 sshd[4544]: Connection closed by 139.178.68.195 port 44732
Mar 17 18:07:32.954038 sshd-session[4541]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:32.969554 systemd[1]: sshd@10-157.180.43.77:22-139.178.68.195:44732.service: Deactivated successfully.
Mar 17 18:07:32.975988 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:07:32.978657 systemd-logind[1624]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:07:32.980652 systemd-logind[1624]: Removed session 10.
Mar 17 18:07:33.117823 systemd[1]: Started sshd@11-157.180.43.77:22-139.178.68.195:44744.service - OpenSSH per-connection server daemon (139.178.68.195:44744).
Mar 17 18:07:34.109950 sshd[4556]: Accepted publickey for core from 139.178.68.195 port 44744 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:34.112868 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:34.121019 systemd-logind[1624]: New session 11 of user core.
Mar 17 18:07:34.130873 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 18:07:34.942215 sshd[4559]: Connection closed by 139.178.68.195 port 44744
Mar 17 18:07:34.943402 sshd-session[4556]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:34.951133 systemd-logind[1624]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:07:34.952537 systemd[1]: sshd@11-157.180.43.77:22-139.178.68.195:44744.service: Deactivated successfully.
Mar 17 18:07:34.960462 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:07:34.962615 systemd-logind[1624]: Removed session 11.
Mar 17 18:07:35.108530 systemd[1]: Started sshd@12-157.180.43.77:22-139.178.68.195:44760.service - OpenSSH per-connection server daemon (139.178.68.195:44760).
Mar 17 18:07:36.120155 sshd[4568]: Accepted publickey for core from 139.178.68.195 port 44760 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:36.122891 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:36.132492 systemd-logind[1624]: New session 12 of user core.
Mar 17 18:07:36.140887 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 18:07:36.899648 sshd[4571]: Connection closed by 139.178.68.195 port 44760
Mar 17 18:07:36.900418 sshd-session[4568]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:36.903618 systemd[1]: sshd@12-157.180.43.77:22-139.178.68.195:44760.service: Deactivated successfully.
Mar 17 18:07:36.909478 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:07:36.910504 systemd-logind[1624]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:07:36.911698 systemd-logind[1624]: Removed session 12.
Mar 17 18:07:42.067134 systemd[1]: Started sshd@13-157.180.43.77:22-139.178.68.195:34658.service - OpenSSH per-connection server daemon (139.178.68.195:34658).
Mar 17 18:07:43.070788 sshd[4582]: Accepted publickey for core from 139.178.68.195 port 34658 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:43.072278 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:43.076396 systemd-logind[1624]: New session 13 of user core.
Mar 17 18:07:43.081564 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 18:07:43.558202 update_engine[1625]: I20250317 18:07:43.558125 1625 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 18:07:43.558202 update_engine[1625]: I20250317 18:07:43.558192 1625 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 18:07:43.560760 update_engine[1625]: I20250317 18:07:43.560722 1625 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 18:07:43.561311 update_engine[1625]: I20250317 18:07:43.561275 1625 omaha_request_params.cc:62] Current group set to stable
Mar 17 18:07:43.561640 update_engine[1625]: I20250317 18:07:43.561462 1625 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 18:07:43.561640 update_engine[1625]: I20250317 18:07:43.561486 1625 update_attempter.cc:643] Scheduling an action processor start.
Mar 17 18:07:43.561640 update_engine[1625]: I20250317 18:07:43.561509 1625 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 18:07:43.561640 update_engine[1625]: I20250317 18:07:43.561579 1625 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 18:07:43.561785 update_engine[1625]: I20250317 18:07:43.561692 1625 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 18:07:43.561785 update_engine[1625]: I20250317 18:07:43.561708 1625 omaha_request_action.cc:272] Request:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]:
Mar 17 18:07:43.561785 update_engine[1625]: I20250317 18:07:43.561719 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:07:43.574620 locksmithd[1671]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 18:07:43.575067 update_engine[1625]: I20250317 18:07:43.574848 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:07:43.575273 update_engine[1625]: I20250317 18:07:43.575208 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:07:43.576038 update_engine[1625]: E20250317 18:07:43.575992 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:07:43.576101 update_engine[1625]: I20250317 18:07:43.576072 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 18:07:43.813068 sshd[4587]: Connection closed by 139.178.68.195 port 34658
Mar 17 18:07:43.813872 sshd-session[4582]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:43.822021 systemd[1]: sshd@13-157.180.43.77:22-139.178.68.195:34658.service: Deactivated successfully.
Mar 17 18:07:43.822358 systemd-logind[1624]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:07:43.826350 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:07:43.827515 systemd-logind[1624]: Removed session 13.
Mar 17 18:07:43.978738 systemd[1]: Started sshd@14-157.180.43.77:22-139.178.68.195:34662.service - OpenSSH per-connection server daemon (139.178.68.195:34662).
Mar 17 18:07:44.985301 sshd[4598]: Accepted publickey for core from 139.178.68.195 port 34662 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:44.987950 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:44.996711 systemd-logind[1624]: New session 14 of user core.
Mar 17 18:07:45.006025 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 18:07:46.003351 sshd[4601]: Connection closed by 139.178.68.195 port 34662
Mar 17 18:07:46.005247 sshd-session[4598]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:46.020633 systemd[1]: sshd@14-157.180.43.77:22-139.178.68.195:34662.service: Deactivated successfully.
Mar 17 18:07:46.027437 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:07:46.028963 systemd-logind[1624]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:07:46.031115 systemd-logind[1624]: Removed session 14.
Mar 17 18:07:46.170919 systemd[1]: Started sshd@15-157.180.43.77:22-139.178.68.195:42294.service - OpenSSH per-connection server daemon (139.178.68.195:42294).
Mar 17 18:07:47.174202 sshd[4610]: Accepted publickey for core from 139.178.68.195 port 42294 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:47.177041 sshd-session[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:47.185825 systemd-logind[1624]: New session 15 of user core.
Mar 17 18:07:47.190821 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 18:07:49.370131 sshd[4613]: Connection closed by 139.178.68.195 port 42294
Mar 17 18:07:49.372302 sshd-session[4610]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:49.380956 systemd[1]: sshd@15-157.180.43.77:22-139.178.68.195:42294.service: Deactivated successfully.
Mar 17 18:07:49.388995 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:07:49.391807 systemd-logind[1624]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:07:49.394624 systemd-logind[1624]: Removed session 15.
Mar 17 18:07:49.536922 systemd[1]: Started sshd@16-157.180.43.77:22-139.178.68.195:42302.service - OpenSSH per-connection server daemon (139.178.68.195:42302).
Mar 17 18:07:50.557144 sshd[4630]: Accepted publickey for core from 139.178.68.195 port 42302 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:50.559922 sshd-session[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:50.569400 systemd-logind[1624]: New session 16 of user core.
Mar 17 18:07:50.573831 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 18:07:51.427079 sshd[4633]: Connection closed by 139.178.68.195 port 42302
Mar 17 18:07:51.427818 sshd-session[4630]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:51.431950 systemd[1]: sshd@16-157.180.43.77:22-139.178.68.195:42302.service: Deactivated successfully.
Mar 17 18:07:51.437779 systemd-logind[1624]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:07:51.438622 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:07:51.440230 systemd-logind[1624]: Removed session 16.
Mar 17 18:07:51.591264 systemd[1]: Started sshd@17-157.180.43.77:22-139.178.68.195:42312.service - OpenSSH per-connection server daemon (139.178.68.195:42312).
Mar 17 18:07:52.560134 sshd[4642]: Accepted publickey for core from 139.178.68.195 port 42312 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:52.561878 sshd-session[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:52.566499 systemd-logind[1624]: New session 17 of user core.
Mar 17 18:07:52.571601 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 18:07:53.289134 sshd[4645]: Connection closed by 139.178.68.195 port 42312
Mar 17 18:07:53.289853 sshd-session[4642]: pam_unix(sshd:session): session closed for user core
Mar 17 18:07:53.292851 systemd[1]: sshd@17-157.180.43.77:22-139.178.68.195:42312.service: Deactivated successfully.
Mar 17 18:07:53.297256 systemd-logind[1624]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:07:53.297617 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:07:53.298991 systemd-logind[1624]: Removed session 17.
Mar 17 18:07:53.528620 update_engine[1625]: I20250317 18:07:53.528535 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:07:53.529096 update_engine[1625]: I20250317 18:07:53.528826 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:07:53.529155 update_engine[1625]: I20250317 18:07:53.529088 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:07:53.529481 update_engine[1625]: E20250317 18:07:53.529441 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:07:53.529559 update_engine[1625]: I20250317 18:07:53.529495 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 17 18:07:58.485554 systemd[1]: Started sshd@18-157.180.43.77:22-139.178.68.195:35602.service - OpenSSH per-connection server daemon (139.178.68.195:35602).
Mar 17 18:07:59.560419 sshd[4659]: Accepted publickey for core from 139.178.68.195 port 35602 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:07:59.563005 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:07:59.571473 systemd-logind[1624]: New session 18 of user core.
Mar 17 18:07:59.577940 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 18:08:00.362093 sshd[4662]: Connection closed by 139.178.68.195 port 35602
Mar 17 18:08:00.363152 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
Mar 17 18:08:00.368627 systemd[1]: sshd@18-157.180.43.77:22-139.178.68.195:35602.service: Deactivated successfully.
Mar 17 18:08:00.377563 systemd-logind[1624]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:08:00.378591 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:08:00.381010 systemd-logind[1624]: Removed session 18.
Mar 17 18:08:03.529846 update_engine[1625]: I20250317 18:08:03.529673 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:08:03.530513 update_engine[1625]: I20250317 18:08:03.530072 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:08:03.530674 update_engine[1625]: I20250317 18:08:03.530573 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:08:03.530975 update_engine[1625]: E20250317 18:08:03.530906 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:08:03.531040 update_engine[1625]: I20250317 18:08:03.531003 1625 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 18:08:05.514722 systemd[1]: Started sshd@19-157.180.43.77:22-139.178.68.195:35608.service - OpenSSH per-connection server daemon (139.178.68.195:35608).
Mar 17 18:08:06.504542 sshd[4673]: Accepted publickey for core from 139.178.68.195 port 35608 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:08:06.506207 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:08:06.510690 systemd-logind[1624]: New session 19 of user core.
Mar 17 18:08:06.515632 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 18:08:07.275664 sshd[4676]: Connection closed by 139.178.68.195 port 35608
Mar 17 18:08:07.276933 sshd-session[4673]: pam_unix(sshd:session): session closed for user core
Mar 17 18:08:07.282928 systemd[1]: sshd@19-157.180.43.77:22-139.178.68.195:35608.service: Deactivated successfully.
Mar 17 18:08:07.291403 systemd-logind[1624]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:08:07.292608 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:08:07.295861 systemd-logind[1624]: Removed session 19.
Mar 17 18:08:07.445585 systemd[1]: Started sshd@20-157.180.43.77:22-139.178.68.195:51248.service - OpenSSH per-connection server daemon (139.178.68.195:51248).
Mar 17 18:08:08.443546 sshd[4687]: Accepted publickey for core from 139.178.68.195 port 51248 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:08:08.446126 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:08:08.454645 systemd-logind[1624]: New session 20 of user core.
Mar 17 18:08:08.462006 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 18:08:10.323579 containerd[1649]: time="2025-03-17T18:08:10.323467846Z" level=info msg="StopContainer for \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\" with timeout 30 (s)"
Mar 17 18:08:10.330208 containerd[1649]: time="2025-03-17T18:08:10.329477186Z" level=info msg="Stop container \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\" with signal terminated"
Mar 17 18:08:10.388877 containerd[1649]: time="2025-03-17T18:08:10.388560596Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:08:10.391753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970-rootfs.mount: Deactivated successfully.
Mar 17 18:08:10.398875 containerd[1649]: time="2025-03-17T18:08:10.398590269Z" level=info msg="shim disconnected" id=98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970 namespace=k8s.io
Mar 17 18:08:10.398875 containerd[1649]: time="2025-03-17T18:08:10.398869811Z" level=warning msg="cleaning up after shim disconnected" id=98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970 namespace=k8s.io
Mar 17 18:08:10.398875 containerd[1649]: time="2025-03-17T18:08:10.398880721Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:08:10.400809 containerd[1649]: time="2025-03-17T18:08:10.400652224Z" level=info msg="StopContainer for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" with timeout 2 (s)"
Mar 17 18:08:10.401083 containerd[1649]: time="2025-03-17T18:08:10.401020204Z" level=info msg="Stop container \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" with signal terminated"
Mar 17 18:08:10.411151 systemd-networkd[1251]: lxc_health: Link DOWN
Mar 17 18:08:10.411159 systemd-networkd[1251]: lxc_health: Lost carrier
Mar 17 18:08:10.427524 containerd[1649]: time="2025-03-17T18:08:10.427092094Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:08:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:08:10.433207 containerd[1649]: time="2025-03-17T18:08:10.433184762Z" level=info msg="StopContainer for \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\" returns successfully"
Mar 17 18:08:10.435967 containerd[1649]: time="2025-03-17T18:08:10.435658651Z" level=info msg="StopPodSandbox for \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\""
Mar 17 18:08:10.437689 containerd[1649]: time="2025-03-17T18:08:10.437599866Z" level=info msg="Container to stop \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:08:10.443466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa-shm.mount: Deactivated successfully.
Mar 17 18:08:10.471813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965-rootfs.mount: Deactivated successfully.
Mar 17 18:08:10.484908 containerd[1649]: time="2025-03-17T18:08:10.482737510Z" level=info msg="shim disconnected" id=5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965 namespace=k8s.io
Mar 17 18:08:10.484908 containerd[1649]: time="2025-03-17T18:08:10.482889669Z" level=warning msg="cleaning up after shim disconnected" id=5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965 namespace=k8s.io
Mar 17 18:08:10.484908 containerd[1649]: time="2025-03-17T18:08:10.482898315Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:08:10.492984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa-rootfs.mount: Deactivated successfully.
Mar 17 18:08:10.496908 containerd[1649]: time="2025-03-17T18:08:10.496792334Z" level=info msg="shim disconnected" id=3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa namespace=k8s.io
Mar 17 18:08:10.496908 containerd[1649]: time="2025-03-17T18:08:10.496898576Z" level=warning msg="cleaning up after shim disconnected" id=3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa namespace=k8s.io
Mar 17 18:08:10.496908 containerd[1649]: time="2025-03-17T18:08:10.496907032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:08:10.504771 containerd[1649]: time="2025-03-17T18:08:10.504684277Z" level=info msg="StopContainer for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" returns successfully"
Mar 17 18:08:10.505030 containerd[1649]: time="2025-03-17T18:08:10.505006099Z" level=info msg="StopPodSandbox for \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\""
Mar 17 18:08:10.505072 containerd[1649]: time="2025-03-17T18:08:10.505038371Z" level=info msg="Container to stop \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:08:10.505072 containerd[1649]: time="2025-03-17T18:08:10.505068118Z" level=info msg="Container to stop \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:08:10.505141 containerd[1649]: time="2025-03-17T18:08:10.505076013Z" level=info msg="Container to stop \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:08:10.505141 containerd[1649]: time="2025-03-17T18:08:10.505084218Z" level=info msg="Container to stop \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:08:10.505141 containerd[1649]: time="2025-03-17T18:08:10.505093036Z" level=info msg="Container to stop \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:08:10.516518 containerd[1649]: time="2025-03-17T18:08:10.516231899Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:08:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:08:10.518690 containerd[1649]: time="2025-03-17T18:08:10.518579438Z" level=info msg="TearDown network for sandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" successfully"
Mar 17 18:08:10.518690 containerd[1649]: time="2025-03-17T18:08:10.518628992Z" level=info msg="StopPodSandbox for \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" returns successfully"
Mar 17 18:08:10.541346 containerd[1649]: time="2025-03-17T18:08:10.541285279Z" level=info msg="shim disconnected" id=ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742 namespace=k8s.io
Mar 17 18:08:10.541346 containerd[1649]: time="2025-03-17T18:08:10.541342880Z" level=warning msg="cleaning up after shim disconnected" id=ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742 namespace=k8s.io
Mar 17 18:08:10.541509 containerd[1649]: time="2025-03-17T18:08:10.541352678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:08:10.552737 containerd[1649]: time="2025-03-17T18:08:10.552688718Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:08:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 18:08:10.553936 containerd[1649]: time="2025-03-17T18:08:10.553912758Z" level=info msg="TearDown network for sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" successfully"
Mar 17 18:08:10.553936 containerd[1649]: time="2025-03-17T18:08:10.553931493Z" level=info msg="StopPodSandbox for \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" returns successfully"
Mar 17 18:08:10.645773 kubelet[3108]: I0317 18:08:10.645512 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-cgroup\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.645773 kubelet[3108]: I0317 18:08:10.645557 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xct\" (UniqueName: \"kubernetes.io/projected/0c631824-e69f-4681-b0f5-d67417577ed5-kube-api-access-49xct\") pod \"0c631824-e69f-4681-b0f5-d67417577ed5\" (UID: \"0c631824-e69f-4681-b0f5-d67417577ed5\") "
Mar 17 18:08:10.645773 kubelet[3108]: I0317 18:08:10.645579 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-config-path\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.645773 kubelet[3108]: I0317 18:08:10.645595 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-net\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.645773 kubelet[3108]: I0317 18:08:10.645610 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-run\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.645773 kubelet[3108]: I0317 18:08:10.645624 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-lib-modules\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.646735 kubelet[3108]: I0317 18:08:10.645658 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld5fq\" (UniqueName: \"kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-kube-api-access-ld5fq\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.646735 kubelet[3108]: I0317 18:08:10.645714 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hubble-tls\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.646735 kubelet[3108]: I0317 18:08:10.646306 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hostproc\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.646735 kubelet[3108]: I0317 18:08:10.646345 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-kernel\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.646735 kubelet[3108]: I0317 18:08:10.646367 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c631824-e69f-4681-b0f5-d67417577ed5-cilium-config-path\") pod \"0c631824-e69f-4681-b0f5-d67417577ed5\" (UID: \"0c631824-e69f-4681-b0f5-d67417577ed5\") "
Mar 17 18:08:10.646735 kubelet[3108]: I0317 18:08:10.646385 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-bpf-maps\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.647061 kubelet[3108]: I0317 18:08:10.646399 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-xtables-lock\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.647061 kubelet[3108]: I0317 18:08:10.646416 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beb5c595-6ad8-4c10-bd42-4ea1f075736d-clustermesh-secrets\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.647061 kubelet[3108]: I0317 18:08:10.646431 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-etc-cni-netd\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.647061 kubelet[3108]: I0317 18:08:10.646445 3108 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cni-path\") pod \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\" (UID: \"beb5c595-6ad8-4c10-bd42-4ea1f075736d\") "
Mar 17 18:08:10.658452 kubelet[3108]: I0317 18:08:10.655843 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.658452 kubelet[3108]: I0317 18:08:10.658439 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cni-path" (OuterVolumeSpecName: "cni-path") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.658610 kubelet[3108]: I0317 18:08:10.658469 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hostproc" (OuterVolumeSpecName: "hostproc") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.666487 kubelet[3108]: I0317 18:08:10.665624 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:08:10.666487 kubelet[3108]: I0317 18:08:10.665686 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.666487 kubelet[3108]: I0317 18:08:10.665710 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.666487 kubelet[3108]: I0317 18:08:10.665725 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.668515 kubelet[3108]: I0317 18:08:10.668288 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c631824-e69f-4681-b0f5-d67417577ed5-kube-api-access-49xct" (OuterVolumeSpecName: "kube-api-access-49xct") pod "0c631824-e69f-4681-b0f5-d67417577ed5" (UID: "0c631824-e69f-4681-b0f5-d67417577ed5"). InnerVolumeSpecName "kube-api-access-49xct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:08:10.668805 kubelet[3108]: I0317 18:08:10.668768 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.671649 kubelet[3108]: I0317 18:08:10.671607 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:08:10.671740 kubelet[3108]: I0317 18:08:10.671722 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-kube-api-access-ld5fq" (OuterVolumeSpecName: "kube-api-access-ld5fq") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "kube-api-access-ld5fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:08:10.671787 kubelet[3108]: I0317 18:08:10.671749 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.671787 kubelet[3108]: I0317 18:08:10.671766 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.673989 kubelet[3108]: I0317 18:08:10.673955 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb5c595-6ad8-4c10-bd42-4ea1f075736d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:08:10.674060 kubelet[3108]: I0317 18:08:10.674000 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "beb5c595-6ad8-4c10-bd42-4ea1f075736d" (UID: "beb5c595-6ad8-4c10-bd42-4ea1f075736d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:08:10.674965 kubelet[3108]: I0317 18:08:10.674917 3108 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c631824-e69f-4681-b0f5-d67417577ed5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c631824-e69f-4681-b0f5-d67417577ed5" (UID: "0c631824-e69f-4681-b0f5-d67417577ed5"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:08:10.747343 kubelet[3108]: I0317 18:08:10.747250 3108 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hubble-tls\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.747276 3108 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-hostproc\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.747997 3108 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-kernel\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.748025 3108 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c631824-e69f-4681-b0f5-d67417577ed5-cilium-config-path\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.748247 3108 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-bpf-maps\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.748267 3108 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-xtables-lock\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.748278 3108 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beb5c595-6ad8-4c10-bd42-4ea1f075736d-clustermesh-secrets\") on 
node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.748287 3108 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-etc-cni-netd\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750568 kubelet[3108]: I0317 18:08:10.748308 3108 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cni-path\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748331 3108 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-cgroup\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748340 3108 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-49xct\" (UniqueName: \"kubernetes.io/projected/0c631824-e69f-4681-b0f5-d67417577ed5-kube-api-access-49xct\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748348 3108 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-config-path\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748360 3108 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-host-proc-sys-net\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748369 3108 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-cilium-run\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748376 3108 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb5c595-6ad8-4c10-bd42-4ea1f075736d-lib-modules\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.750993 kubelet[3108]: I0317 18:08:10.748388 3108 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ld5fq\" (UniqueName: \"kubernetes.io/projected/beb5c595-6ad8-4c10-bd42-4ea1f075736d-kube-api-access-ld5fq\") on node \"ci-4152-2-2-2-c2b93240d2\" DevicePath \"\"" Mar 17 18:08:10.804842 kubelet[3108]: E0317 18:08:10.804755 3108 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:08:11.363526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742-rootfs.mount: Deactivated successfully. Mar 17 18:08:11.363870 systemd[1]: var-lib-kubelet-pods-0c631824\x2de69f\x2d4681\x2db0f5\x2dd67417577ed5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d49xct.mount: Deactivated successfully. Mar 17 18:08:11.364159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742-shm.mount: Deactivated successfully. Mar 17 18:08:11.364447 systemd[1]: var-lib-kubelet-pods-beb5c595\x2d6ad8\x2d4c10\x2dbd42\x2d4ea1f075736d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dld5fq.mount: Deactivated successfully. Mar 17 18:08:11.364729 systemd[1]: var-lib-kubelet-pods-beb5c595\x2d6ad8\x2d4c10\x2dbd42\x2d4ea1f075736d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:08:11.364986 systemd[1]: var-lib-kubelet-pods-beb5c595\x2d6ad8\x2d4c10\x2dbd42\x2d4ea1f075736d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:08:11.494788 kubelet[3108]: I0317 18:08:11.494733 3108 scope.go:117] "RemoveContainer" containerID="5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965"
Mar 17 18:08:11.511834 containerd[1649]: time="2025-03-17T18:08:11.511665993Z" level=info msg="RemoveContainer for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\""
Mar 17 18:08:11.523981 containerd[1649]: time="2025-03-17T18:08:11.523629453Z" level=info msg="RemoveContainer for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" returns successfully"
Mar 17 18:08:11.524374 kubelet[3108]: I0317 18:08:11.524243 3108 scope.go:117] "RemoveContainer" containerID="ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8"
Mar 17 18:08:11.526096 containerd[1649]: time="2025-03-17T18:08:11.525841164Z" level=info msg="RemoveContainer for \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\""
Mar 17 18:08:11.530424 containerd[1649]: time="2025-03-17T18:08:11.530092336Z" level=info msg="RemoveContainer for \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\" returns successfully"
Mar 17 18:08:11.530483 kubelet[3108]: I0317 18:08:11.530263 3108 scope.go:117] "RemoveContainer" containerID="af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb"
Mar 17 18:08:11.532506 containerd[1649]: time="2025-03-17T18:08:11.532257457Z" level=info msg="RemoveContainer for \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\""
Mar 17 18:08:11.535606 containerd[1649]: time="2025-03-17T18:08:11.535500930Z" level=info msg="RemoveContainer for \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\" returns successfully"
Mar 17 18:08:11.536250 kubelet[3108]: I0317 18:08:11.535894 3108 scope.go:117] "RemoveContainer" containerID="56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91"
Mar 17 18:08:11.537437 containerd[1649]: time="2025-03-17T18:08:11.537310506Z" level=info msg="RemoveContainer for \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\""
Mar 17 18:08:11.541359 containerd[1649]: time="2025-03-17T18:08:11.541305449Z" level=info msg="RemoveContainer for \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\" returns successfully"
Mar 17 18:08:11.541627 kubelet[3108]: I0317 18:08:11.541562 3108 scope.go:117] "RemoveContainer" containerID="eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9"
Mar 17 18:08:11.542764 containerd[1649]: time="2025-03-17T18:08:11.542670317Z" level=info msg="RemoveContainer for \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\""
Mar 17 18:08:11.545914 containerd[1649]: time="2025-03-17T18:08:11.545876269Z" level=info msg="RemoveContainer for \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\" returns successfully"
Mar 17 18:08:11.546081 kubelet[3108]: I0317 18:08:11.545995 3108 scope.go:117] "RemoveContainer" containerID="5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965"
Mar 17 18:08:11.546451 containerd[1649]: time="2025-03-17T18:08:11.546400277Z" level=error msg="ContainerStatus for \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\": not found"
Mar 17 18:08:11.556204 kubelet[3108]: E0317 18:08:11.556154 3108 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\": not found" containerID="5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965"
Mar 17 18:08:11.562881 kubelet[3108]: I0317 18:08:11.556218 3108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965"} err="failed to get container status \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\": rpc error: code = NotFound desc = an error occurred when try to find container \"5385e584b600f14ffaf7e88aa2210d02001ba4d6b9cb4edc898f60fd6bf30965\": not found"
Mar 17 18:08:11.562881 kubelet[3108]: I0317 18:08:11.562880 3108 scope.go:117] "RemoveContainer" containerID="ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8"
Mar 17 18:08:11.563166 containerd[1649]: time="2025-03-17T18:08:11.563096547Z" level=error msg="ContainerStatus for \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\": not found"
Mar 17 18:08:11.563305 kubelet[3108]: E0317 18:08:11.563253 3108 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\": not found" containerID="ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8"
Mar 17 18:08:11.563305 kubelet[3108]: I0317 18:08:11.563287 3108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8"} err="failed to get container status \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce362eafc0fe1fd3b633c38017e3d6f3f2763e5f94afad495572e4b0ee70b7c8\": not found"
Mar 17 18:08:11.563379 kubelet[3108]: I0317 18:08:11.563306 3108 scope.go:117] "RemoveContainer" containerID="af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb"
Mar 17 18:08:11.563715 containerd[1649]: time="2025-03-17T18:08:11.563632367Z" level=error msg="ContainerStatus for \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\": not found"
Mar 17 18:08:11.563978 kubelet[3108]: E0317 18:08:11.563927 3108 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\": not found" containerID="af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb"
Mar 17 18:08:11.563978 kubelet[3108]: I0317 18:08:11.563959 3108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb"} err="failed to get container status \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"af4b9e60368b3447712c7825c2c894fcaba91ba3f0ca77a33bc2b2e8bce5c3bb\": not found"
Mar 17 18:08:11.563978 kubelet[3108]: I0317 18:08:11.563985 3108 scope.go:117] "RemoveContainer" containerID="56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91"
Mar 17 18:08:11.564186 containerd[1649]: time="2025-03-17T18:08:11.564150563Z" level=error msg="ContainerStatus for \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\": not found"
Mar 17 18:08:11.564336 kubelet[3108]: E0317 18:08:11.564284 3108 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\": not found" containerID="56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91"
Mar 17 18:08:11.564336 kubelet[3108]: I0317 18:08:11.564312 3108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91"} err="failed to get container status \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\": rpc error: code = NotFound desc = an error occurred when try to find container \"56b3834f57e365ac5a888dea3db8e8bf92bde68e739b0bfbe04a50d7f60ede91\": not found"
Mar 17 18:08:11.564438 kubelet[3108]: I0317 18:08:11.564346 3108 scope.go:117] "RemoveContainer" containerID="eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9"
Mar 17 18:08:11.564551 containerd[1649]: time="2025-03-17T18:08:11.564480050Z" level=error msg="ContainerStatus for \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\": not found"
Mar 17 18:08:11.564667 kubelet[3108]: E0317 18:08:11.564617 3108 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\": not found" containerID="eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9"
Mar 17 18:08:11.564782 kubelet[3108]: I0317 18:08:11.564741 3108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9"} err="failed to get container status \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"eec8e815253a7ba724300ea11ca8a92731979df075b851a445087690b83981b9\": not found"
Mar 17 18:08:11.564782 kubelet[3108]: I0317 18:08:11.564763 3108 scope.go:117] "RemoveContainer" containerID="98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970"
Mar 17 18:08:11.566183 containerd[1649]: time="2025-03-17T18:08:11.566081959Z" level=info msg="RemoveContainer for \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\""
Mar 17 18:08:11.570123 containerd[1649]: time="2025-03-17T18:08:11.570088435Z" level=info msg="RemoveContainer for \"98fa1728cb3426ab70f7d192391031460df534b6080b649770b27370d5163970\" returns successfully"
Mar 17 18:08:11.638946 kubelet[3108]: I0317 18:08:11.638807 3108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c631824-e69f-4681-b0f5-d67417577ed5" path="/var/lib/kubelet/pods/0c631824-e69f-4681-b0f5-d67417577ed5/volumes"
Mar 17 18:08:11.639837 kubelet[3108]: I0317 18:08:11.639468 3108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" path="/var/lib/kubelet/pods/beb5c595-6ad8-4c10-bd42-4ea1f075736d/volumes"
Mar 17 18:08:12.417611 sshd[4690]: Connection closed by 139.178.68.195 port 51248
Mar 17 18:08:12.418800 sshd-session[4687]: pam_unix(sshd:session): session closed for user core
Mar 17 18:08:12.428736 systemd[1]: sshd@20-157.180.43.77:22-139.178.68.195:51248.service: Deactivated successfully.
Mar 17 18:08:12.433495 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:08:12.435620 systemd-logind[1624]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:08:12.437289 systemd-logind[1624]: Removed session 20.
Mar 17 18:08:12.585644 systemd[1]: Started sshd@21-157.180.43.77:22-139.178.68.195:51264.service - OpenSSH per-connection server daemon (139.178.68.195:51264).
Mar 17 18:08:13.069361 kubelet[3108]: I0317 18:08:13.068613 3108 setters.go:580] "Node became not ready" node="ci-4152-2-2-2-c2b93240d2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:08:13Z","lastTransitionTime":"2025-03-17T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:08:13.529048 update_engine[1625]: I20250317 18:08:13.528946 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:08:13.529925 update_engine[1625]: I20250317 18:08:13.529301 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:08:13.529925 update_engine[1625]: I20250317 18:08:13.529690 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:08:13.530148 update_engine[1625]: E20250317 18:08:13.530082 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:08:13.530222 update_engine[1625]: I20250317 18:08:13.530160 1625 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 18:08:13.530222 update_engine[1625]: I20250317 18:08:13.530179 1625 omaha_request_action.cc:617] Omaha request response:
Mar 17 18:08:13.531205 update_engine[1625]: E20250317 18:08:13.530301 1625 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 17 18:08:13.531205 update_engine[1625]: I20250317 18:08:13.530375 1625 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 17 18:08:13.531205 update_engine[1625]: I20250317 18:08:13.530390 1625 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:08:13.531205 update_engine[1625]: I20250317 18:08:13.530402 1625 update_attempter.cc:306] Processing Done.
Mar 17 18:08:13.531205 update_engine[1625]: E20250317 18:08:13.530428 1625 update_attempter.cc:619] Update failed.
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533547 1625 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533591 1625 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533608 1625 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533735 1625 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533772 1625 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533785 1625 omaha_request_action.cc:272] Request:
Mar 17 18:08:13.534037 update_engine[1625]:
Mar 17 18:08:13.534037 update_engine[1625]:
Mar 17 18:08:13.534037 update_engine[1625]:
Mar 17 18:08:13.534037 update_engine[1625]:
Mar 17 18:08:13.534037 update_engine[1625]:
Mar 17 18:08:13.534037 update_engine[1625]:
Mar 17 18:08:13.534037 update_engine[1625]: I20250317 18:08:13.533799 1625 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:08:13.534658 update_engine[1625]: I20250317 18:08:13.534080 1625 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:08:13.534658 update_engine[1625]: I20250317 18:08:13.534483 1625 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:08:13.535078 update_engine[1625]: E20250317 18:08:13.534883 1625 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.534951 1625 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.534967 1625 omaha_request_action.cc:617] Omaha request response:
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.534981 1625 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.534994 1625 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.535006 1625 update_attempter.cc:306] Processing Done.
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.535019 1625 update_attempter.cc:310] Error event sent.
Mar 17 18:08:13.535078 update_engine[1625]: I20250317 18:08:13.535036 1625 update_check_scheduler.cc:74] Next update check in 48m56s
Mar 17 18:08:13.535428 locksmithd[1671]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 18:08:13.535428 locksmithd[1671]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 17 18:08:13.562109 sshd[4857]: Accepted publickey for core from 139.178.68.195 port 51264 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:08:13.564816 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:08:13.573425 systemd-logind[1624]: New session 21 of user core.
Mar 17 18:08:13.578843 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 18:08:14.584572 kubelet[3108]: I0317 18:08:14.584532 3108 topology_manager.go:215] "Topology Admit Handler" podUID="2974b1a2-f5f1-4c5b-bad1-11d146160385" podNamespace="kube-system" podName="cilium-rxz8s"
Mar 17 18:08:14.584985 kubelet[3108]: E0317 18:08:14.584592 3108 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" containerName="clean-cilium-state"
Mar 17 18:08:14.584985 kubelet[3108]: E0317 18:08:14.584602 3108 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" containerName="apply-sysctl-overwrites"
Mar 17 18:08:14.584985 kubelet[3108]: E0317 18:08:14.584611 3108 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" containerName="mount-bpf-fs"
Mar 17 18:08:14.584985 kubelet[3108]: E0317 18:08:14.584617 3108 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" containerName="cilium-agent"
Mar 17 18:08:14.584985 kubelet[3108]: E0317 18:08:14.584623 3108 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" containerName="mount-cgroup"
Mar 17 18:08:14.584985 kubelet[3108]: E0317 18:08:14.584629 3108 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c631824-e69f-4681-b0f5-d67417577ed5" containerName="cilium-operator"
Mar 17 18:08:14.584985 kubelet[3108]: I0317 18:08:14.584649 3108 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb5c595-6ad8-4c10-bd42-4ea1f075736d" containerName="cilium-agent"
Mar 17 18:08:14.584985 kubelet[3108]: I0317 18:08:14.584656 3108 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c631824-e69f-4681-b0f5-d67417577ed5" containerName="cilium-operator"
Mar 17 18:08:14.677024 kubelet[3108]: I0317 18:08:14.676978 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2974b1a2-f5f1-4c5b-bad1-11d146160385-cilium-ipsec-secrets\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677024 kubelet[3108]: I0317 18:08:14.677018 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4ds\" (UniqueName: \"kubernetes.io/projected/2974b1a2-f5f1-4c5b-bad1-11d146160385-kube-api-access-lf4ds\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677024 kubelet[3108]: I0317 18:08:14.677035 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-bpf-maps\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677024 kubelet[3108]: I0317 18:08:14.677051 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-hostproc\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677024 kubelet[3108]: I0317 18:08:14.677100 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-cilium-cgroup\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677024 kubelet[3108]: I0317 18:08:14.677142 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-lib-modules\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677596 kubelet[3108]: I0317 18:08:14.677161 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-xtables-lock\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677596 kubelet[3108]: I0317 18:08:14.677193 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-cni-path\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677596 kubelet[3108]: I0317 18:08:14.677216 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-etc-cni-netd\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677596 kubelet[3108]: I0317 18:08:14.677241 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2974b1a2-f5f1-4c5b-bad1-11d146160385-clustermesh-secrets\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677596 kubelet[3108]: I0317 18:08:14.677264 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-cilium-run\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677596 kubelet[3108]: I0317 18:08:14.677281 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-host-proc-sys-net\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677810 kubelet[3108]: I0317 18:08:14.677309 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2974b1a2-f5f1-4c5b-bad1-11d146160385-host-proc-sys-kernel\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677810 kubelet[3108]: I0317 18:08:14.677387 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2974b1a2-f5f1-4c5b-bad1-11d146160385-cilium-config-path\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.677810 kubelet[3108]: I0317 18:08:14.677402 3108 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2974b1a2-f5f1-4c5b-bad1-11d146160385-hubble-tls\") pod \"cilium-rxz8s\" (UID: \"2974b1a2-f5f1-4c5b-bad1-11d146160385\") " pod="kube-system/cilium-rxz8s"
Mar 17 18:08:14.766277 sshd[4862]: Connection closed by 139.178.68.195 port 51264
Mar 17 18:08:14.767091 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
Mar 17 18:08:14.772834 systemd[1]: sshd@21-157.180.43.77:22-139.178.68.195:51264.service: Deactivated successfully.
Mar 17 18:08:14.773885 systemd-logind[1624]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:08:14.780838 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:08:14.783909 systemd-logind[1624]: Removed session 21.
Mar 17 18:08:14.899623 containerd[1649]: time="2025-03-17T18:08:14.899517063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxz8s,Uid:2974b1a2-f5f1-4c5b-bad1-11d146160385,Namespace:kube-system,Attempt:0,}" Mar 17 18:08:14.924364 containerd[1649]: time="2025-03-17T18:08:14.924269407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:08:14.924494 containerd[1649]: time="2025-03-17T18:08:14.924353496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:08:14.924494 containerd[1649]: time="2025-03-17T18:08:14.924368025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:08:14.924494 containerd[1649]: time="2025-03-17T18:08:14.924445462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:08:14.938716 systemd[1]: Started sshd@22-157.180.43.77:22-139.178.68.195:51276.service - OpenSSH per-connection server daemon (139.178.68.195:51276). 
Mar 17 18:08:14.962181 containerd[1649]: time="2025-03-17T18:08:14.962141748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxz8s,Uid:2974b1a2-f5f1-4c5b-bad1-11d146160385,Namespace:kube-system,Attempt:0,} returns sandbox id \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\"" Mar 17 18:08:14.965115 containerd[1649]: time="2025-03-17T18:08:14.964989186Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:08:14.975588 containerd[1649]: time="2025-03-17T18:08:14.975535816Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cf5bdeb995e5d63a84473a58801af568eef80bb4b1f9994c506191003da6ada\"" Mar 17 18:08:14.976917 containerd[1649]: time="2025-03-17T18:08:14.976026079Z" level=info msg="StartContainer for \"9cf5bdeb995e5d63a84473a58801af568eef80bb4b1f9994c506191003da6ada\"" Mar 17 18:08:15.023705 containerd[1649]: time="2025-03-17T18:08:15.023659774Z" level=info msg="StartContainer for \"9cf5bdeb995e5d63a84473a58801af568eef80bb4b1f9994c506191003da6ada\" returns successfully" Mar 17 18:08:15.072904 containerd[1649]: time="2025-03-17T18:08:15.072846538Z" level=info msg="shim disconnected" id=9cf5bdeb995e5d63a84473a58801af568eef80bb4b1f9994c506191003da6ada namespace=k8s.io Mar 17 18:08:15.072904 containerd[1649]: time="2025-03-17T18:08:15.072895321Z" level=warning msg="cleaning up after shim disconnected" id=9cf5bdeb995e5d63a84473a58801af568eef80bb4b1f9994c506191003da6ada namespace=k8s.io Mar 17 18:08:15.072904 containerd[1649]: time="2025-03-17T18:08:15.072904939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:08:15.083190 containerd[1649]: time="2025-03-17T18:08:15.083153391Z" level=warning msg="cleanup warnings 
time=\"2025-03-17T18:08:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 18:08:15.534681 containerd[1649]: time="2025-03-17T18:08:15.534617352Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:08:15.554772 containerd[1649]: time="2025-03-17T18:08:15.554665300Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5aa8489b62b1bdb0ec5efc628c4a2c972ae9fe880d41f0b36ea9bdb41f341c1f\"" Mar 17 18:08:15.556558 containerd[1649]: time="2025-03-17T18:08:15.555375091Z" level=info msg="StartContainer for \"5aa8489b62b1bdb0ec5efc628c4a2c972ae9fe880d41f0b36ea9bdb41f341c1f\"" Mar 17 18:08:15.623562 containerd[1649]: time="2025-03-17T18:08:15.623523919Z" level=info msg="StartContainer for \"5aa8489b62b1bdb0ec5efc628c4a2c972ae9fe880d41f0b36ea9bdb41f341c1f\" returns successfully" Mar 17 18:08:15.651947 containerd[1649]: time="2025-03-17T18:08:15.651903500Z" level=info msg="shim disconnected" id=5aa8489b62b1bdb0ec5efc628c4a2c972ae9fe880d41f0b36ea9bdb41f341c1f namespace=k8s.io Mar 17 18:08:15.652148 containerd[1649]: time="2025-03-17T18:08:15.651942034Z" level=warning msg="cleaning up after shim disconnected" id=5aa8489b62b1bdb0ec5efc628c4a2c972ae9fe880d41f0b36ea9bdb41f341c1f namespace=k8s.io Mar 17 18:08:15.652148 containerd[1649]: time="2025-03-17T18:08:15.651967763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:08:15.662651 containerd[1649]: time="2025-03-17T18:08:15.662620864Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:08:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate 
successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 18:08:15.806434 kubelet[3108]: E0317 18:08:15.805780 3108 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:08:15.938783 sshd[4903]: Accepted publickey for core from 139.178.68.195 port 51276 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 18:08:15.941160 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:08:15.948449 systemd-logind[1624]: New session 22 of user core. Mar 17 18:08:15.953836 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 18:08:16.530817 containerd[1649]: time="2025-03-17T18:08:16.530771123Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:08:16.550798 containerd[1649]: time="2025-03-17T18:08:16.550767741Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90ac8eba39b84b134d7e7e7b8c9591a3871a50e6ecdfa32308d2ddd91cb4b472\"" Mar 17 18:08:16.551662 containerd[1649]: time="2025-03-17T18:08:16.551623329Z" level=info msg="StartContainer for \"90ac8eba39b84b134d7e7e7b8c9591a3871a50e6ecdfa32308d2ddd91cb4b472\"" Mar 17 18:08:16.623117 containerd[1649]: time="2025-03-17T18:08:16.623017672Z" level=info msg="StartContainer for \"90ac8eba39b84b134d7e7e7b8c9591a3871a50e6ecdfa32308d2ddd91cb4b472\" returns successfully" Mar 17 18:08:16.625351 sshd[5047]: Connection closed by 139.178.68.195 port 51276 Mar 17 18:08:16.626456 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Mar 17 18:08:16.633074 systemd[1]: 
sshd@22-157.180.43.77:22-139.178.68.195:51276.service: Deactivated successfully. Mar 17 18:08:16.638631 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:08:16.640074 systemd-logind[1624]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:08:16.642720 systemd-logind[1624]: Removed session 22. Mar 17 18:08:16.661498 containerd[1649]: time="2025-03-17T18:08:16.661452783Z" level=info msg="shim disconnected" id=90ac8eba39b84b134d7e7e7b8c9591a3871a50e6ecdfa32308d2ddd91cb4b472 namespace=k8s.io Mar 17 18:08:16.661777 containerd[1649]: time="2025-03-17T18:08:16.661726744Z" level=warning msg="cleaning up after shim disconnected" id=90ac8eba39b84b134d7e7e7b8c9591a3871a50e6ecdfa32308d2ddd91cb4b472 namespace=k8s.io Mar 17 18:08:16.661777 containerd[1649]: time="2025-03-17T18:08:16.661757122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:08:16.786987 systemd[1]: Started sshd@23-157.180.43.77:22-139.178.68.195:34674.service - OpenSSH per-connection server daemon (139.178.68.195:34674). Mar 17 18:08:16.803826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90ac8eba39b84b134d7e7e7b8c9591a3871a50e6ecdfa32308d2ddd91cb4b472-rootfs.mount: Deactivated successfully. Mar 17 18:08:17.534537 containerd[1649]: time="2025-03-17T18:08:17.534027017Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:08:17.557216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104372882.mount: Deactivated successfully. 
Mar 17 18:08:17.558722 containerd[1649]: time="2025-03-17T18:08:17.557896842Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"559048f2f367a175b9c8feec44cbf334e1f37df77d864561bf2fd16278d7db44\"" Mar 17 18:08:17.559016 containerd[1649]: time="2025-03-17T18:08:17.558967908Z" level=info msg="StartContainer for \"559048f2f367a175b9c8feec44cbf334e1f37df77d864561bf2fd16278d7db44\"" Mar 17 18:08:17.617178 containerd[1649]: time="2025-03-17T18:08:17.617037710Z" level=info msg="StartContainer for \"559048f2f367a175b9c8feec44cbf334e1f37df77d864561bf2fd16278d7db44\" returns successfully" Mar 17 18:08:17.635607 containerd[1649]: time="2025-03-17T18:08:17.635420104Z" level=info msg="shim disconnected" id=559048f2f367a175b9c8feec44cbf334e1f37df77d864561bf2fd16278d7db44 namespace=k8s.io Mar 17 18:08:17.635607 containerd[1649]: time="2025-03-17T18:08:17.635481160Z" level=warning msg="cleaning up after shim disconnected" id=559048f2f367a175b9c8feec44cbf334e1f37df77d864561bf2fd16278d7db44 namespace=k8s.io Mar 17 18:08:17.635607 containerd[1649]: time="2025-03-17T18:08:17.635489055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:08:17.769623 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 34674 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 18:08:17.773192 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:08:17.781810 systemd-logind[1624]: New session 23 of user core. Mar 17 18:08:17.786902 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 18:08:17.803220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-559048f2f367a175b9c8feec44cbf334e1f37df77d864561bf2fd16278d7db44-rootfs.mount: Deactivated successfully. 
Mar 17 18:08:18.543172 containerd[1649]: time="2025-03-17T18:08:18.543073315Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:08:18.566259 containerd[1649]: time="2025-03-17T18:08:18.566217175Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5\"" Mar 17 18:08:18.571978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747468778.mount: Deactivated successfully. Mar 17 18:08:18.574542 containerd[1649]: time="2025-03-17T18:08:18.574085994Z" level=info msg="StartContainer for \"50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5\"" Mar 17 18:08:18.635448 containerd[1649]: time="2025-03-17T18:08:18.635273690Z" level=info msg="StartContainer for \"50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5\" returns successfully" Mar 17 18:08:19.120362 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 18:08:19.570688 kubelet[3108]: I0317 18:08:19.569980 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rxz8s" podStartSLOduration=5.56995522 podStartE2EDuration="5.56995522s" podCreationTimestamp="2025-03-17 18:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:08:19.569602079 +0000 UTC m=+354.013431973" watchObservedRunningTime="2025-03-17 18:08:19.56995522 +0000 UTC m=+354.013785116" Mar 17 18:08:25.678547 containerd[1649]: time="2025-03-17T18:08:25.678469808Z" level=info msg="StopPodSandbox for \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\"" Mar 17 18:08:25.679869 containerd[1649]: 
time="2025-03-17T18:08:25.679532188Z" level=info msg="TearDown network for sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" successfully" Mar 17 18:08:25.679869 containerd[1649]: time="2025-03-17T18:08:25.679620347Z" level=info msg="StopPodSandbox for \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" returns successfully" Mar 17 18:08:25.681346 containerd[1649]: time="2025-03-17T18:08:25.681235387Z" level=info msg="RemovePodSandbox for \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\"" Mar 17 18:08:25.681346 containerd[1649]: time="2025-03-17T18:08:25.681294940Z" level=info msg="Forcibly stopping sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\"" Mar 17 18:08:25.681699 containerd[1649]: time="2025-03-17T18:08:25.681483659Z" level=info msg="TearDown network for sandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" successfully" Mar 17 18:08:25.689399 containerd[1649]: time="2025-03-17T18:08:25.689246239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:08:25.689399 containerd[1649]: time="2025-03-17T18:08:25.689356519Z" level=info msg="RemovePodSandbox \"ede20c343d41340a5ce69ab79ffb7a4757fad09187f887d71e0d4eb21bce2742\" returns successfully" Mar 17 18:08:25.690120 containerd[1649]: time="2025-03-17T18:08:25.689955498Z" level=info msg="StopPodSandbox for \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\"" Mar 17 18:08:25.690252 containerd[1649]: time="2025-03-17T18:08:25.690082169Z" level=info msg="TearDown network for sandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" successfully" Mar 17 18:08:25.690252 containerd[1649]: time="2025-03-17T18:08:25.690143516Z" level=info msg="StopPodSandbox for \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" returns successfully" Mar 17 18:08:25.690704 containerd[1649]: time="2025-03-17T18:08:25.690626574Z" level=info msg="RemovePodSandbox for \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\"" Mar 17 18:08:25.690704 containerd[1649]: time="2025-03-17T18:08:25.690674675Z" level=info msg="Forcibly stopping sandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\"" Mar 17 18:08:25.690865 containerd[1649]: time="2025-03-17T18:08:25.690759627Z" level=info msg="TearDown network for sandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" successfully" Mar 17 18:08:25.698369 containerd[1649]: time="2025-03-17T18:08:25.696968874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:08:25.698369 containerd[1649]: time="2025-03-17T18:08:25.697085125Z" level=info msg="RemovePodSandbox \"3399c66cf8db958b6e1a184391381bf5803fe6f4416ff9686dc6ac166eb54dfa\" returns successfully" Mar 17 18:08:31.498456 systemd[1]: run-containerd-runc-k8s.io-50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5-runc.YoE2ce.mount: Deactivated successfully. Mar 17 18:08:35.833222 kubelet[3108]: E0317 18:08:35.833179 3108 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49792->127.0.0.1:46273: write tcp 127.0.0.1:49792->127.0.0.1:46273: write: broken pipe Mar 17 18:08:41.347945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5-rootfs.mount: Deactivated successfully. Mar 17 18:08:41.358287 containerd[1649]: time="2025-03-17T18:08:41.358235108Z" level=info msg="shim disconnected" id=50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5 namespace=k8s.io Mar 17 18:08:41.358287 containerd[1649]: time="2025-03-17T18:08:41.358287787Z" level=warning msg="cleaning up after shim disconnected" id=50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5 namespace=k8s.io Mar 17 18:08:41.358756 containerd[1649]: time="2025-03-17T18:08:41.358297025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:08:41.597768 kubelet[3108]: I0317 18:08:41.597717 3108 scope.go:117] "RemoveContainer" containerID="50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5" Mar 17 18:08:41.604152 containerd[1649]: time="2025-03-17T18:08:41.603940517Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:1,}" Mar 17 18:08:41.622077 containerd[1649]: time="2025-03-17T18:08:41.622038473Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for 
&ContainerMetadata{Name:cilium-agent,Attempt:1,} returns container id \"c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a\"" Mar 17 18:08:41.624590 containerd[1649]: time="2025-03-17T18:08:41.624558190Z" level=info msg="StartContainer for \"c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a\"" Mar 17 18:08:41.625442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838986433.mount: Deactivated successfully. Mar 17 18:08:41.680694 containerd[1649]: time="2025-03-17T18:08:41.680278391Z" level=info msg="StartContainer for \"c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a\" returns successfully" Mar 17 18:08:41.744419 containerd[1649]: time="2025-03-17T18:08:41.744367498Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:08:46.467899 systemd[1]: run-containerd-runc-k8s.io-c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a-runc.bV4Xin.mount: Deactivated successfully. Mar 17 18:08:57.210446 kubelet[3108]: E0317 18:08:57.210111 3108 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46740->127.0.0.1:46273: write tcp 127.0.0.1:46740->127.0.0.1:46273: write: connection reset by peer Mar 17 18:09:01.460163 systemd[1]: run-containerd-runc-k8s.io-c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a-runc.tmgNca.mount: Deactivated successfully. Mar 17 18:09:04.242161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a-rootfs.mount: Deactivated successfully. 
Mar 17 18:09:04.249312 containerd[1649]: time="2025-03-17T18:09:04.249180770Z" level=info msg="shim disconnected" id=c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a namespace=k8s.io Mar 17 18:09:04.249312 containerd[1649]: time="2025-03-17T18:09:04.249261453Z" level=warning msg="cleaning up after shim disconnected" id=c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a namespace=k8s.io Mar 17 18:09:04.249312 containerd[1649]: time="2025-03-17T18:09:04.249277603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:09:04.655682 kubelet[3108]: I0317 18:09:04.655519 3108 scope.go:117] "RemoveContainer" containerID="50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5" Mar 17 18:09:04.656563 kubelet[3108]: I0317 18:09:04.656513 3108 scope.go:117] "RemoveContainer" containerID="c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a" Mar 17 18:09:04.661468 containerd[1649]: time="2025-03-17T18:09:04.661393373Z" level=info msg="RemoveContainer for \"50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5\"" Mar 17 18:09:04.666210 kubelet[3108]: E0317 18:09:04.666003 3108 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cilium-agent pod=cilium-rxz8s_kube-system(2974b1a2-f5f1-4c5b-bad1-11d146160385)\"" pod="kube-system/cilium-rxz8s" podUID="2974b1a2-f5f1-4c5b-bad1-11d146160385" Mar 17 18:09:04.670086 containerd[1649]: time="2025-03-17T18:09:04.670035398Z" level=info msg="RemoveContainer for \"50ab016c490a7b8727922e379c7784beb5e10f9eee57167305788d0e86fbb9f5\" returns successfully" Mar 17 18:09:05.669484 kubelet[3108]: I0317 18:09:05.668935 3108 scope.go:117] "RemoveContainer" containerID="c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a" Mar 17 18:09:05.671655 kubelet[3108]: E0317 18:09:05.670587 3108 pod_workers.go:1298] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"cilium-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cilium-agent pod=cilium-rxz8s_kube-system(2974b1a2-f5f1-4c5b-bad1-11d146160385)\"" pod="kube-system/cilium-rxz8s" podUID="2974b1a2-f5f1-4c5b-bad1-11d146160385" Mar 17 18:09:14.900121 kubelet[3108]: I0317 18:09:14.900034 3108 scope.go:117] "RemoveContainer" containerID="c912df716955fc81836c4423170d5dcd5631d7c51936b1452ea52167a89de99a" Mar 17 18:09:14.903699 containerd[1649]: time="2025-03-17T18:09:14.903633266Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:2,}" Mar 17 18:09:14.922828 containerd[1649]: time="2025-03-17T18:09:14.922765273Z" level=info msg="CreateContainer within sandbox \"90dcd2ccc548a9dea47f9c2a10459ab60c7074fdcd958b64e0a522b22f44ea1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:2,} returns container id \"5f7de8ae7eb0b5c117162934b4704b6e42462d619b6bc18f7e4742ea914eac8c\"" Mar 17 18:09:14.924583 containerd[1649]: time="2025-03-17T18:09:14.923597562Z" level=info msg="StartContainer for \"5f7de8ae7eb0b5c117162934b4704b6e42462d619b6bc18f7e4742ea914eac8c\"" Mar 17 18:09:15.002956 containerd[1649]: time="2025-03-17T18:09:15.002912292Z" level=info msg="StartContainer for \"5f7de8ae7eb0b5c117162934b4704b6e42462d619b6bc18f7e4742ea914eac8c\" returns successfully" Mar 17 18:09:18.722165 sshd[5173]: Connection closed by 139.178.68.195 port 34674 Mar 17 18:09:18.723609 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Mar 17 18:09:18.729121 systemd[1]: sshd@23-157.180.43.77:22-139.178.68.195:34674.service: Deactivated successfully. Mar 17 18:09:18.738949 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:09:18.739677 systemd-logind[1624]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:09:18.742302 systemd-logind[1624]: Removed session 23. 
Mar 17 18:09:22.279646 systemd-networkd[1251]: lxc_health: Link UP Mar 17 18:09:22.287774 systemd-networkd[1251]: lxc_health: Gained carrier Mar 17 18:09:23.381552 systemd-networkd[1251]: lxc_health: Gained IPv6LL Mar 17 18:09:37.716585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c-rootfs.mount: Deactivated successfully. Mar 17 18:09:37.736405 containerd[1649]: time="2025-03-17T18:09:37.736250113Z" level=info msg="shim disconnected" id=7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c namespace=k8s.io Mar 17 18:09:37.736405 containerd[1649]: time="2025-03-17T18:09:37.736373467Z" level=warning msg="cleaning up after shim disconnected" id=7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c namespace=k8s.io Mar 17 18:09:37.736405 containerd[1649]: time="2025-03-17T18:09:37.736386793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:09:37.762591 kubelet[3108]: I0317 18:09:37.762544 3108 scope.go:117] "RemoveContainer" containerID="7938860e061915e152fd2fb8894ec45107adf2e985d373f733360ce026dc908c" Mar 17 18:09:37.766774 containerd[1649]: time="2025-03-17T18:09:37.766732966Z" level=info msg="CreateContainer within sandbox \"6ea5dc64b185e002eed5573a1d7f997c509da889766ff383138d41bd19b98e0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 17 18:09:37.785849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486791890.mount: Deactivated successfully. 
Mar 17 18:09:37.786680 containerd[1649]: time="2025-03-17T18:09:37.786526440Z" level=info msg="CreateContainer within sandbox \"6ea5dc64b185e002eed5573a1d7f997c509da889766ff383138d41bd19b98e0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a6ad97a42884a62b23af5224f3fded1e0d6cc565b2c2fd5948e17d52fcb10e8c\"" Mar 17 18:09:37.787645 containerd[1649]: time="2025-03-17T18:09:37.787381099Z" level=info msg="StartContainer for \"a6ad97a42884a62b23af5224f3fded1e0d6cc565b2c2fd5948e17d52fcb10e8c\"" Mar 17 18:09:37.857430 containerd[1649]: time="2025-03-17T18:09:37.857389802Z" level=info msg="StartContainer for \"a6ad97a42884a62b23af5224f3fded1e0d6cc565b2c2fd5948e17d52fcb10e8c\" returns successfully" Mar 17 18:09:38.178804 kubelet[3108]: E0317 18:09:38.178701 3108 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48022->10.0.0.2:2379: read: connection timed out" Mar 17 18:09:38.717949 systemd[1]: run-containerd-runc-k8s.io-a6ad97a42884a62b23af5224f3fded1e0d6cc565b2c2fd5948e17d52fcb10e8c-runc.Hwo59h.mount: Deactivated successfully. 
Mar 17 18:09:41.496080 kubelet[3108]: E0317 18:09:41.485292 3108 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-2-2-c2b93240d2.182da979ce50c874 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-2-2-c2b93240d2,UID:287f41dc6d4ec93844315a960ed65279,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-2-c2b93240d2,},FirstTimestamp:2025-03-17 18:09:31.479787636 +0000 UTC m=+425.923617551,LastTimestamp:2025-03-17 18:09:31.479787636 +0000 UTC m=+425.923617551,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-2-c2b93240d2,}" Mar 17 18:09:43.485825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b-rootfs.mount: Deactivated successfully. 
Mar 17 18:09:43.497861 containerd[1649]: time="2025-03-17T18:09:43.497751110Z" level=info msg="shim disconnected" id=9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b namespace=k8s.io Mar 17 18:09:43.497861 containerd[1649]: time="2025-03-17T18:09:43.497832693Z" level=warning msg="cleaning up after shim disconnected" id=9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b namespace=k8s.io Mar 17 18:09:43.499054 containerd[1649]: time="2025-03-17T18:09:43.497875775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:09:43.791616 kubelet[3108]: I0317 18:09:43.790907 3108 scope.go:117] "RemoveContainer" containerID="9fee7570f5177368fd911b07e68fc07b2fb0ff6ccd651c8bce2892dd227d4d0b" Mar 17 18:09:43.795832 containerd[1649]: time="2025-03-17T18:09:43.795689608Z" level=info msg="CreateContainer within sandbox \"41f78a8c15b247360d112964a2df96b65c4e3df277f9e4bc4ea87844d2684e0c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 17 18:09:43.823764 containerd[1649]: time="2025-03-17T18:09:43.823486971Z" level=info msg="CreateContainer within sandbox \"41f78a8c15b247360d112964a2df96b65c4e3df277f9e4bc4ea87844d2684e0c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c2a6906b4fb8a9ea273804176acf0e2053a8c13b4628a95ec99fd6232c20e452\"" Mar 17 18:09:43.825912 containerd[1649]: time="2025-03-17T18:09:43.824580802Z" level=info msg="StartContainer for \"c2a6906b4fb8a9ea273804176acf0e2053a8c13b4628a95ec99fd6232c20e452\"" Mar 17 18:09:43.824834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433350978.mount: Deactivated successfully. 
Mar 17 18:09:43.911196 containerd[1649]: time="2025-03-17T18:09:43.911156600Z" level=info msg="StartContainer for \"c2a6906b4fb8a9ea273804176acf0e2053a8c13b4628a95ec99fd6232c20e452\" returns successfully" Mar 17 18:09:48.180251 kubelet[3108]: E0317 18:09:48.179458 3108 controller.go:195] "Failed to update lease" err="Put \"https://157.180.43.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-2-c2b93240d2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:09:48.217651 kubelet[3108]: I0317 18:09:48.217566 3108 status_manager.go:853] "Failed to get status for pod" podUID="be81feb501b16f752e62adfbff9952de" pod="kube-system/kube-controller-manager-ci-4152-2-2-2-c2b93240d2" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47968->10.0.0.2:2379: read: connection timed out"