Mar 17 17:54:48.953917 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:54:48.953942 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:54:48.953950 kernel: BIOS-provided physical RAM map:
Mar 17 17:54:48.953957 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:54:48.953963 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:54:48.953969 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:54:48.953979 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Mar 17 17:54:48.953992 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Mar 17 17:54:48.954008 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 17:54:48.954016 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 17:54:48.954025 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:54:48.954033 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:54:48.954041 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:54:48.954047 kernel: NX (Execute Disable) protection: active
Mar 17 17:54:48.954058 kernel: APIC: Static calls initialized
Mar 17 17:54:48.954065 kernel: SMBIOS 3.0.0 present.
Mar 17 17:54:48.954071 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 17 17:54:48.954078 kernel: Hypervisor detected: KVM
Mar 17 17:54:48.954084 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:54:48.954091 kernel: kvm-clock: using sched offset of 2743552997 cycles
Mar 17 17:54:48.954101 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:54:48.954111 kernel: tsc: Detected 2495.312 MHz processor
Mar 17 17:54:48.954121 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:54:48.954165 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:54:48.954173 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Mar 17 17:54:48.954180 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:54:48.954187 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:54:48.954195 kernel: Using GB pages for direct mapping
Mar 17 17:54:48.954205 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:54:48.954214 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Mar 17 17:54:48.954224 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954233 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954243 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954250 kernel: ACPI: FACS 0x000000007CFE0000 000040
Mar 17 17:54:48.954257 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954263 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954278 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954285 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:54:48.954294 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Mar 17 17:54:48.954304 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Mar 17 17:54:48.954323 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Mar 17 17:54:48.954330 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Mar 17 17:54:48.954337 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Mar 17 17:54:48.954345 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Mar 17 17:54:48.954352 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Mar 17 17:54:48.954358 kernel: No NUMA configuration found
Mar 17 17:54:48.954366 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Mar 17 17:54:48.954376 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Mar 17 17:54:48.954383 kernel: Zone ranges:
Mar 17 17:54:48.954393 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:54:48.954403 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Mar 17 17:54:48.954414 kernel: Normal empty
Mar 17 17:54:48.954422 kernel: Movable zone start for each node
Mar 17 17:54:48.954429 kernel: Early memory node ranges
Mar 17 17:54:48.954436 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:54:48.954443 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Mar 17 17:54:48.954454 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Mar 17 17:54:48.954461 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:54:48.954468 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:54:48.954475 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 17:54:48.954484 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:54:48.954494 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:54:48.954504 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:54:48.954514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:54:48.954521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:54:48.954531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:54:48.954538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:54:48.954545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:54:48.954553 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:54:48.954560 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:54:48.954567 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:54:48.954575 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:54:48.954585 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 17:54:48.954595 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:54:48.954608 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:54:48.954619 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:54:48.954627 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:54:48.954634 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:54:48.954643 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:54:48.954650 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 17:54:48.954659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:54:48.954667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:54:48.954680 kernel: random: crng init done
Mar 17 17:54:48.954691 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:54:48.954703 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:54:48.954713 kernel: Fallback order for Node 0: 0
Mar 17 17:54:48.954722 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Mar 17 17:54:48.954729 kernel: Policy zone: DMA32
Mar 17 17:54:48.954736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:54:48.954744 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 125152K reserved, 0K cma-reserved)
Mar 17 17:54:48.954751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:54:48.954761 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:54:48.954768 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:54:48.954775 kernel: Dynamic Preempt: voluntary
Mar 17 17:54:48.954785 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:54:48.954796 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:54:48.954806 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:54:48.954816 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:54:48.954823 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:54:48.954830 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:54:48.954837 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:54:48.954848 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:54:48.954855 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:54:48.954862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:54:48.954869 kernel: Console: colour VGA+ 80x25
Mar 17 17:54:48.954879 kernel: printk: console [tty0] enabled
Mar 17 17:54:48.954889 kernel: printk: console [ttyS0] enabled
Mar 17 17:54:48.954899 kernel: ACPI: Core revision 20230628
Mar 17 17:54:48.954909 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:54:48.954916 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:54:48.954926 kernel: x2apic enabled
Mar 17 17:54:48.954934 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:54:48.954941 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:54:48.954948 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:54:48.954955 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Mar 17 17:54:48.954962 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:54:48.954971 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:54:48.954982 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:54:48.955006 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:54:48.955014 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:54:48.955021 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:54:48.955028 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:54:48.955038 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:54:48.955046 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:54:48.955053 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:54:48.955062 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:54:48.955073 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:54:48.955087 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:54:48.955098 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:54:48.955105 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:54:48.955113 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:54:48.955120 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:54:48.955128 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:54:48.957165 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:54:48.957173 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:54:48.957185 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:54:48.957196 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:54:48.957207 kernel: landlock: Up and running.
Mar 17 17:54:48.957217 kernel: SELinux: Initializing.
Mar 17 17:54:48.957226 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:54:48.957233 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:54:48.957241 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:54:48.957249 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:54:48.957256 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:54:48.957267 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:54:48.957285 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:54:48.957296 kernel: ... version: 0
Mar 17 17:54:48.957307 kernel: ... bit width: 48
Mar 17 17:54:48.957315 kernel: ... generic registers: 6
Mar 17 17:54:48.957323 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:54:48.957330 kernel: ... max period: 00007fffffffffff
Mar 17 17:54:48.957338 kernel: ... fixed-purpose events: 0
Mar 17 17:54:48.957345 kernel: ... event mask: 000000000000003f
Mar 17 17:54:48.957356 kernel: signal: max sigframe size: 1776
Mar 17 17:54:48.957363 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:54:48.957371 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:54:48.957382 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:54:48.957393 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:54:48.957402 kernel: .... node #0, CPUs: #1
Mar 17 17:54:48.957410 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:54:48.957417 kernel: smpboot: Max logical packages: 1
Mar 17 17:54:48.957425 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Mar 17 17:54:48.957435 kernel: devtmpfs: initialized
Mar 17 17:54:48.957443 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:54:48.957450 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:54:48.957458 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:54:48.957468 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:54:48.957479 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:54:48.957489 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:54:48.957497 kernel: audit: type=2000 audit(1742234087.989:1): state=initialized audit_enabled=0 res=1
Mar 17 17:54:48.957504 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:54:48.957514 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:54:48.957522 kernel: cpuidle: using governor menu
Mar 17 17:54:48.957529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:54:48.957537 kernel: dca service started, version 1.12.1
Mar 17 17:54:48.957544 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 17:54:48.957554 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:54:48.957565 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:54:48.957575 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:54:48.957583 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:54:48.957594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:54:48.957601 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:54:48.957609 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:54:48.957616 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:54:48.957623 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:54:48.957631 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:54:48.957640 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:54:48.957651 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:54:48.957664 kernel: ACPI: Interpreter enabled
Mar 17 17:54:48.957675 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:54:48.957683 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:54:48.957693 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:54:48.957700 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:54:48.957709 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:54:48.957717 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:54:48.957923 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:54:48.958073 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:54:48.958240 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:54:48.958252 kernel: PCI host bridge to bus 0000:00
Mar 17 17:54:48.958399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:54:48.958515 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:54:48.958630 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:54:48.958741 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Mar 17 17:54:48.958852 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 17:54:48.958970 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 17:54:48.959109 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:54:48.959938 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:54:48.960074 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 17 17:54:48.960247 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Mar 17 17:54:48.960414 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Mar 17 17:54:48.960569 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Mar 17 17:54:48.960743 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Mar 17 17:54:48.960918 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:54:48.961066 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.961210 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Mar 17 17:54:48.961351 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.961475 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Mar 17 17:54:48.963266 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.963410 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Mar 17 17:54:48.963542 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.963667 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Mar 17 17:54:48.963804 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.965315 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Mar 17 17:54:48.965447 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.965569 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Mar 17 17:54:48.965703 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.965827 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Mar 17 17:54:48.965957 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.966079 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Mar 17 17:54:48.969364 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:54:48.969511 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Mar 17 17:54:48.969674 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:54:48.969824 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:54:48.969978 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:54:48.970124 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Mar 17 17:54:48.970303 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Mar 17 17:54:48.970461 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:54:48.970598 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 17:54:48.970746 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:54:48.970887 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Mar 17 17:54:48.971017 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 17 17:54:48.971163 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Mar 17 17:54:48.971319 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:54:48.971444 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 17:54:48.971596 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 17:54:48.971757 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:54:48.971888 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Mar 17 17:54:48.972041 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:54:48.972200 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 17:54:48.972366 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 17:54:48.972512 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:54:48.972646 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Mar 17 17:54:48.972774 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Mar 17 17:54:48.972896 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:54:48.973018 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 17:54:48.976357 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 17:54:48.976539 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:54:48.976689 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 17 17:54:48.976839 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:54:48.976977 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 17:54:48.977168 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 17:54:48.977348 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:54:48.977489 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Mar 17 17:54:48.977616 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Mar 17 17:54:48.977738 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:54:48.977858 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 17:54:48.977979 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 17:54:48.978121 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:54:48.978280 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Mar 17 17:54:48.978410 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Mar 17 17:54:48.978554 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:54:48.978678 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 17:54:48.978808 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 17:54:48.978818 kernel: acpiphp: Slot [0] registered
Mar 17 17:54:48.978954 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:54:48.979097 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Mar 17 17:54:48.979286 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Mar 17 17:54:48.979447 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Mar 17 17:54:48.979602 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:54:48.979736 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 17:54:48.979860 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 17:54:48.979874 kernel: acpiphp: Slot [0-2] registered
Mar 17 17:54:48.980016 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:54:48.982164 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 17 17:54:48.982332 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 17:54:48.982356 kernel: acpiphp: Slot [0-3] registered
Mar 17 17:54:48.982500 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:54:48.982644 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 17:54:48.982782 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 17:54:48.982794 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:54:48.982805 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:54:48.982816 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:54:48.982827 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:54:48.982837 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:54:48.982849 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:54:48.982857 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:54:48.982865 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:54:48.982873 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:54:48.982881 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:54:48.982888 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:54:48.982898 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:54:48.982909 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:54:48.982920 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:54:48.982933 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:54:48.982941 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:54:48.982948 kernel: iommu: Default domain type: Translated
Mar 17 17:54:48.982956 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:54:48.982963 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:54:48.982971 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:54:48.982979 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:54:48.982987 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Mar 17 17:54:48.984538 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:54:48.984689 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:54:48.984813 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:54:48.984823 kernel: vgaarb: loaded
Mar 17 17:54:48.984831 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:54:48.984839 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:54:48.984847 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:54:48.984854 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:54:48.984862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:54:48.984870 kernel: pnp: PnP ACPI init
Mar 17 17:54:48.985004 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 17:54:48.985015 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 17:54:48.985024 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:54:48.985032 kernel: NET: Registered PF_INET protocol family
Mar 17 17:54:48.985039 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:54:48.985047 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:54:48.985055 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:54:48.985062 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:54:48.985073 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:54:48.985081 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:54:48.985088 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:54:48.985096 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:54:48.985104 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:54:48.985111 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:54:48.986213 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:54:48.986352 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:54:48.986484 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:54:48.986607 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:54:48.986729 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:54:48.986849 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:54:48.986970 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:54:48.987092 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 17:54:48.987243 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 17:54:48.987376 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:54:48.987504 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 17:54:48.987625 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 17:54:48.987787 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:54:48.987917 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 17:54:48.988039 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 17:54:48.990258 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:54:48.990403 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 17:54:48.990543 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 17:54:48.990668 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:54:48.990789 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 17:54:48.990908 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 17:54:48.991028 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:54:48.991168 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 17:54:48.991302 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 17:54:48.991423 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:54:48.991543 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 17 17:54:48.991668 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 17:54:48.991796 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 17:54:48.991923 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:54:48.992047 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 17 17:54:48.994246 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 17 17:54:48.994385 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 17:54:48.994512 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:54:48.994633 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 17 17:54:48.994758 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 17:54:48.994881 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 17:54:48.994999 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:54:48.995115 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:54:48.995247 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:54:48.995371 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Mar 17 17:54:48.995481 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 17:54:48.995591 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 17:54:48.995720 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 17 17:54:48.995839 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 17:54:48.995968 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 17 17:54:48.996086 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 17:54:48.998281 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 17 17:54:48.998407 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 17:54:48.998531 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 17 17:54:48.998648 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 17:54:48.998791 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 17 17:54:48.998908 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 17:54:48.999034 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 17 17:54:49.000173 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 17:54:49.000315 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Mar 17 17:54:49.000434 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 17 17:54:49.000554 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 17:54:49.000679 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Mar 17 17:54:49.000803 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Mar 17 17:54:49.000918 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 17:54:49.001044 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Mar 17 17:54:49.005956 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 17 17:54:49.006084 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 17:54:49.006102 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:54:49.006110 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:54:49.006119 kernel: Initialise system trusted keyrings
Mar 17 17:54:49.006127 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:54:49.006150 kernel: Key type asymmetric registered
Mar 17 17:54:49.006158 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:54:49.006166 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:54:49.006174 kernel: io scheduler mq-deadline registered
Mar 17 17:54:49.006182 kernel: io scheduler kyber registered
Mar 17 17:54:49.006190 kernel: io scheduler bfq registered
Mar 17 17:54:49.006332 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Mar 17 17:54:49.006458 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Mar 17 17:54:49.006583 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Mar 17 17:54:49.006704 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Mar 17 17:54:49.006845 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Mar 17 17:54:49.006991 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Mar 17 17:54:49.007154 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Mar 17 17:54:49.007308 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Mar 17 17:54:49.007460 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Mar 17 17:54:49.007608 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Mar 17 17:54:49.007750 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Mar 17 17:54:49.007898 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Mar 17 17:54:49.008038 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Mar 17 17:54:49.008203 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Mar 17 17:54:49.008362 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Mar 17 17:54:49.008505 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Mar 17 17:54:49.008523 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:54:49.008669 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Mar 17 17:54:49.008818 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Mar 17 17:54:49.008835 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:54:49.008847 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Mar 17 17:54:49.008855 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:54:49.008864 kernel: 00:00: ttyS0 at I/O
0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:54:49.008872 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:54:49.008880 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:54:49.008892 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:54:49.009044 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 17:54:49.009215 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 17:54:49.009366 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:54:48 UTC (1742234088) Mar 17 17:54:49.009379 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:54:49.009517 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 17 17:54:49.009529 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 17 17:54:49.009542 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:54:49.009550 kernel: Segment Routing with IPv6 Mar 17 17:54:49.009558 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:54:49.009566 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:54:49.009575 kernel: Key type dns_resolver registered Mar 17 17:54:49.009587 kernel: IPI shorthand broadcast: enabled Mar 17 17:54:49.009598 kernel: sched_clock: Marking stable (1148008212, 153302344)->(1315783727, -14473171) Mar 17 17:54:49.009610 kernel: registered taskstats version 1 Mar 17 17:54:49.009618 kernel: Loading compiled-in X.509 certificates Mar 17 17:54:49.009626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:54:49.009637 kernel: Key type .fscrypt registered Mar 17 17:54:49.009645 kernel: Key type fscrypt-provisioning registered Mar 17 17:54:49.009655 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 17 17:54:49.009663 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:54:49.009673 kernel: ima: No architecture policies found Mar 17 17:54:49.009684 kernel: clk: Disabling unused clocks Mar 17 17:54:49.009695 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:54:49.009706 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:54:49.009717 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:54:49.009724 kernel: Run /init as init process Mar 17 17:54:49.009733 kernel: with arguments: Mar 17 17:54:49.009741 kernel: /init Mar 17 17:54:49.009749 kernel: with environment: Mar 17 17:54:49.009757 kernel: HOME=/ Mar 17 17:54:49.009766 kernel: TERM=linux Mar 17 17:54:49.009777 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:54:49.009792 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:54:49.009806 systemd[1]: Detected virtualization kvm. Mar 17 17:54:49.009815 systemd[1]: Detected architecture x86-64. Mar 17 17:54:49.009823 systemd[1]: Running in initrd. Mar 17 17:54:49.009831 systemd[1]: No hostname configured, using default hostname. Mar 17 17:54:49.009839 systemd[1]: Hostname set to <localhost>. Mar 17 17:54:49.009848 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:54:49.009857 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:54:49.009870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:54:49.009886 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 17 17:54:49.009895 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:54:49.009904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:54:49.009913 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:54:49.009922 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:54:49.009932 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:54:49.009943 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:54:49.009955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:54:49.009967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:54:49.009979 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:54:49.009987 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:54:49.009996 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:54:49.010004 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:54:49.010012 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:54:49.010021 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:54:49.010032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:54:49.010042 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:54:49.010054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:54:49.010066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:54:49.010076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:54:49.010084 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:54:49.010093 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:54:49.010101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:54:49.010113 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:54:49.010121 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:54:49.011309 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:54:49.011346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:54:49.011387 systemd-journald[188]: Collecting audit messages is disabled. Mar 17 17:54:49.011416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:54:49.011425 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:54:49.011435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:54:49.011443 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:54:49.011453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:54:49.011466 systemd-journald[188]: Journal started Mar 17 17:54:49.011486 systemd-journald[188]: Runtime Journal (/run/log/journal/3db38ba185c7420cb84d031aa6e92fa0) is 4.8M, max 38.4M, 33.6M free. Mar 17 17:54:48.983196 systemd-modules-load[189]: Inserted module 'overlay' Mar 17 17:54:49.017429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:54:49.022145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:54:49.024144 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:54:49.028193 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 17 17:54:49.037185 kernel: Bridge firewalling registered Mar 17 17:54:49.036885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:54:49.039349 systemd-modules-load[189]: Inserted module 'br_netfilter' Mar 17 17:54:49.040834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:54:49.041705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:54:49.048305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:54:49.053282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:54:49.054948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:54:49.062290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:54:49.073173 dracut-cmdline[212]: dracut-dracut-053 Mar 17 17:54:49.072166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:54:49.077439 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:54:49.078675 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:54:49.087550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:54:49.097361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:54:49.130605 systemd-resolved[248]: Positive Trust Anchors: Mar 17 17:54:49.131386 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:54:49.132098 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:54:49.136678 systemd-resolved[248]: Defaulting to hostname 'linux'. Mar 17 17:54:49.138612 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:54:49.139167 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:54:49.154176 kernel: SCSI subsystem initialized Mar 17 17:54:49.164157 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:54:49.176161 kernel: iscsi: registered transport (tcp) Mar 17 17:54:49.198173 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:54:49.198239 kernel: QLogic iSCSI HBA Driver Mar 17 17:54:49.249112 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:54:49.254256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:54:49.281396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 17 17:54:49.281446 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:54:49.284159 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:54:49.328167 kernel: raid6: avx2x4 gen() 28659 MB/s Mar 17 17:54:49.345160 kernel: raid6: avx2x2 gen() 29824 MB/s Mar 17 17:54:49.362373 kernel: raid6: avx2x1 gen() 25037 MB/s Mar 17 17:54:49.362432 kernel: raid6: using algorithm avx2x2 gen() 29824 MB/s Mar 17 17:54:49.382169 kernel: raid6: .... xor() 19177 MB/s, rmw enabled Mar 17 17:54:49.382223 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:54:49.403165 kernel: xor: automatically using best checksumming function avx Mar 17 17:54:49.558176 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:54:49.571510 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:54:49.577333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:54:49.591848 systemd-udevd[407]: Using default interface naming scheme 'v255'. Mar 17 17:54:49.597210 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:54:49.604807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:54:49.621788 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Mar 17 17:54:49.656522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:54:49.663265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:54:49.739738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:54:49.748387 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:54:49.764964 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:54:49.767864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 17 17:54:49.768994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:54:49.770163 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:54:49.776317 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:54:49.788569 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:54:49.824152 kernel: libata version 3.00 loaded. Mar 17 17:54:49.838237 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:54:49.857149 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 17:54:49.860962 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:54:49.870186 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 17:54:49.972639 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 17:54:49.972657 kernel: ACPI: bus type USB registered Mar 17 17:54:49.972668 kernel: usbcore: registered new interface driver usbfs Mar 17 17:54:49.972678 kernel: usbcore: registered new interface driver hub Mar 17 17:54:49.972688 kernel: usbcore: registered new device driver usb Mar 17 17:54:49.972698 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 17:54:49.972852 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 17:54:49.972991 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 17:54:49.973006 kernel: AES CTR mode by8 optimization enabled Mar 17 17:54:49.973016 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:54:49.973616 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 17:54:49.973762 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 17:54:49.973903 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:54:49.974043 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 17:54:49.974206 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 17:54:49.974359 kernel: hub 1-0:1.0: USB hub found Mar 17 17:54:49.974538 kernel: hub 1-0:1.0: 4 ports detected Mar 17 17:54:49.974697 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 17:54:49.974898 kernel: hub 2-0:1.0: USB hub found Mar 17 17:54:49.976620 kernel: hub 2-0:1.0: 4 ports detected Mar 17 17:54:49.976965 kernel: scsi host1: ahci Mar 17 17:54:49.977124 kernel: scsi host2: ahci Mar 17 17:54:49.977305 kernel: scsi host3: ahci Mar 17 17:54:49.977475 kernel: scsi host4: ahci Mar 17 17:54:49.977617 kernel: scsi host5: ahci Mar 17 17:54:49.977759 kernel: scsi host6: ahci Mar 17 17:54:49.977903 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Mar 17 17:54:49.977915 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Mar 17 17:54:49.977925 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Mar 17 17:54:49.977940 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Mar 17 17:54:49.977950 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Mar 17 17:54:49.977960 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Mar 17 17:54:49.890584 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 17 17:54:49.890703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:54:49.892660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:54:49.893195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:54:49.893323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:54:49.894190 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:54:49.907068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:54:50.019755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:54:50.026310 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:54:50.044789 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:54:50.185167 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 17:54:50.288267 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 17:54:50.288341 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 17:54:50.288367 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 17 17:54:50.288376 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 17:54:50.288386 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 17:54:50.290161 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 17:54:50.291192 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 17:54:50.292303 kernel: ata1.00: applying bridge limits Mar 17 17:54:50.293254 kernel: ata1.00: configured for UDMA/100 Mar 17 17:54:50.296149 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:54:50.319647 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 17 17:54:50.340648 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 
GB/38.1 GiB) Mar 17 17:54:50.340822 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 17:54:50.340977 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 17 17:54:50.341147 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 17:54:50.341319 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:54:50.341331 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:54:50.341341 kernel: GPT:17805311 != 80003071 Mar 17 17:54:50.341355 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:54:50.341364 kernel: GPT:17805311 != 80003071 Mar 17 17:54:50.341374 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:54:50.341383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:54:50.341393 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 17:54:50.349487 kernel: usbcore: registered new interface driver usbhid Mar 17 17:54:50.349509 kernel: usbhid: USB HID core driver Mar 17 17:54:50.355979 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Mar 17 17:54:50.356002 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 17:54:50.373910 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:54:50.373924 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 17:54:50.374123 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:54:50.390169 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (462) Mar 17 17:54:50.395194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Mar 17 17:54:50.396819 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (449) Mar 17 17:54:50.405052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 17:54:50.415410 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 17:54:50.416610 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 17 17:54:50.422975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 17:54:50.429630 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:54:50.435200 disk-uuid[575]: Primary Header is updated. Mar 17 17:54:50.435200 disk-uuid[575]: Secondary Entries is updated. Mar 17 17:54:50.435200 disk-uuid[575]: Secondary Header is updated. Mar 17 17:54:50.439165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:54:51.448372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:54:51.448424 disk-uuid[577]: The operation has completed successfully. Mar 17 17:54:51.505263 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:54:51.505412 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:54:51.521281 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:54:51.526559 sh[594]: Success Mar 17 17:54:51.540480 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 17:54:51.592179 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:54:51.599057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:54:51.603638 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:54:51.626248 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:54:51.626303 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:54:51.629177 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:54:51.629201 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:54:51.630477 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:54:51.640160 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:54:51.642167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:54:51.643374 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:54:51.650334 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:54:51.652528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:54:51.668177 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:54:51.668229 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:54:51.668248 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:54:51.673460 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:54:51.673510 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:54:51.686282 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:54:51.690249 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:54:51.699611 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:54:51.706444 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 17 17:54:51.784696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:54:51.793306 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:54:51.801231 ignition[692]: Ignition 2.20.0 Mar 17 17:54:51.801243 ignition[692]: Stage: fetch-offline Mar 17 17:54:51.801292 ignition[692]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:54:51.801302 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:54:51.801393 ignition[692]: parsed url from cmdline: "" Mar 17 17:54:51.801397 ignition[692]: no config URL provided Mar 17 17:54:51.801402 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:54:51.805414 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:54:51.801411 ignition[692]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:54:51.801416 ignition[692]: failed to fetch config: resource requires networking Mar 17 17:54:51.801588 ignition[692]: Ignition finished successfully Mar 17 17:54:51.819446 systemd-networkd[779]: lo: Link UP Mar 17 17:54:51.819457 systemd-networkd[779]: lo: Gained carrier Mar 17 17:54:51.822316 systemd-networkd[779]: Enumeration completed Mar 17 17:54:51.822399 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:54:51.823165 systemd[1]: Reached target network.target - Network. Mar 17 17:54:51.823412 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:54:51.823417 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:54:51.825253 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:54:51.825257 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:54:51.825864 systemd-networkd[779]: eth0: Link UP Mar 17 17:54:51.825868 systemd-networkd[779]: eth0: Gained carrier Mar 17 17:54:51.825875 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:54:51.830848 systemd-networkd[779]: eth1: Link UP Mar 17 17:54:51.830852 systemd-networkd[779]: eth1: Gained carrier Mar 17 17:54:51.830858 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:54:51.836980 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 17:54:51.848943 ignition[783]: Ignition 2.20.0 Mar 17 17:54:51.848954 ignition[783]: Stage: fetch Mar 17 17:54:51.849121 ignition[783]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:54:51.850182 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:54:51.849161 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:54:51.849256 ignition[783]: parsed url from cmdline: "" Mar 17 17:54:51.849261 ignition[783]: no config URL provided Mar 17 17:54:51.849266 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:54:51.849276 ignition[783]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:54:51.849299 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 17 17:54:51.849444 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 17 17:54:51.886185 systemd-networkd[779]: eth0: DHCPv4 address 37.27.0.76/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 17:54:52.050033 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 17 
17:54:52.054253 ignition[783]: GET result: OK Mar 17 17:54:52.054317 ignition[783]: parsing config with SHA512: 832b829967feeea43b92bf384ad6592deadb4b68eb6b9a147c24af73559045ea35e7ec6bd00e61c9e34c7f2bf2a78042ba4c5ec188f94dab1cd1c66bef15abf3 Mar 17 17:54:52.058023 unknown[783]: fetched base config from "system" Mar 17 17:54:52.058038 unknown[783]: fetched base config from "system" Mar 17 17:54:52.058376 ignition[783]: fetch: fetch complete Mar 17 17:54:52.058045 unknown[783]: fetched user config from "hetzner" Mar 17 17:54:52.058381 ignition[783]: fetch: fetch passed Mar 17 17:54:52.058421 ignition[783]: Ignition finished successfully Mar 17 17:54:52.061823 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:54:52.067288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:54:52.082467 ignition[791]: Ignition 2.20.0 Mar 17 17:54:52.082483 ignition[791]: Stage: kargs Mar 17 17:54:52.082643 ignition[791]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:54:52.082654 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:54:52.083499 ignition[791]: kargs: kargs passed Mar 17 17:54:52.083548 ignition[791]: Ignition finished successfully Mar 17 17:54:52.086545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:54:52.097278 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:54:52.109301 ignition[798]: Ignition 2.20.0 Mar 17 17:54:52.109314 ignition[798]: Stage: disks Mar 17 17:54:52.109550 ignition[798]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:54:52.109561 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:54:52.110399 ignition[798]: disks: disks passed Mar 17 17:54:52.110450 ignition[798]: Ignition finished successfully Mar 17 17:54:52.113449 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:54:52.114687 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Mar 17 17:54:52.115455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:54:52.116495 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:54:52.117693 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:54:52.118812 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:54:52.124334 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:54:52.140190 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:54:52.143257 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:54:52.151288 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:54:52.244158 kernel: EXT4-fs (sda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 17 17:54:52.244365 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:54:52.245579 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:54:52.252218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:54:52.255218 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:54:52.257583 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:54:52.260120 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:54:52.261199 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:54:52.268153 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814)
Mar 17 17:54:52.273280 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:54:52.273316 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:54:52.273327 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:54:52.271476 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:54:52.279966 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:54:52.280001 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:54:52.280641 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:54:52.285346 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:54:52.328238 coreos-metadata[816]: Mar 17 17:54:52.328 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:54:52.329645 coreos-metadata[816]: Mar 17 17:54:52.329 INFO Fetch successful
Mar 17 17:54:52.331185 coreos-metadata[816]: Mar 17 17:54:52.330 INFO wrote hostname ci-4152-2-2-5-05efd5484b to /sysroot/etc/hostname
Mar 17 17:54:52.333690 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:54:52.336331 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:54:52.341171 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:54:52.346676 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:54:52.350935 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:54:52.447584 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:54:52.466313 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:54:52.471350 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:54:52.477165 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:54:52.501260 ignition[931]: INFO : Ignition 2.20.0
Mar 17 17:54:52.501260 ignition[931]: INFO : Stage: mount
Mar 17 17:54:52.502885 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:52.502885 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:54:52.504743 ignition[931]: INFO : mount: mount passed
Mar 17 17:54:52.505271 ignition[931]: INFO : Ignition finished successfully
Mar 17 17:54:52.505245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:54:52.507000 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:54:52.512256 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:54:52.624455 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:54:52.628267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:54:52.640170 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943)
Mar 17 17:54:52.643367 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:54:52.643395 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:54:52.645536 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:54:52.650461 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:54:52.650522 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:54:52.652803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:54:52.677041 ignition[959]: INFO : Ignition 2.20.0
Mar 17 17:54:52.677041 ignition[959]: INFO : Stage: files
Mar 17 17:54:52.678386 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:52.678386 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:54:52.678386 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:54:52.680757 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:54:52.680757 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:54:52.682379 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:54:52.683415 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:54:52.684637 unknown[959]: wrote ssh authorized keys file for user: core
Mar 17 17:54:52.685529 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:54:52.686711 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:54:52.687636 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:54:52.883884 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:54:53.179640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:54:53.179640 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:54:53.181722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 17:54:53.261262 systemd-networkd[779]: eth1: Gained IPv6LL
Mar 17 17:54:53.325318 systemd-networkd[779]: eth0: Gained IPv6LL
Mar 17 17:54:53.929705 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:54:54.775961 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:54:54.775961 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:54:54.777816 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:54:55.344881 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:54:55.643125 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:54:55.643125 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 17:54:55.645350 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:54:55.653908 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:54:55.653908 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:54:55.653908 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:54:55.653908 ignition[959]: INFO : files: files passed
Mar 17 17:54:55.653908 ignition[959]: INFO : Ignition finished successfully
Mar 17 17:54:55.647993 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:54:55.655315 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:54:55.659570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:54:55.669496 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:54:55.670199 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:54:55.680040 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:54:55.680040 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:54:55.681745 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:54:55.684533 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:54:55.685986 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:54:55.692313 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:54:55.717939 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:54:55.718062 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:54:55.719621 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:54:55.720508 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:54:55.721653 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:54:55.726316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:54:55.745850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:54:55.756277 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:54:55.765536 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:54:55.766379 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:54:55.767582 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:54:55.768668 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:54:55.768819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:54:55.769978 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:54:55.770727 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:54:55.771810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:54:55.772762 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:54:55.773756 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:54:55.774892 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:54:55.776084 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:54:55.777259 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:54:55.778366 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:54:55.779508 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:54:55.780543 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:54:55.780656 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:54:55.781849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:54:55.782627 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:54:55.783610 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:54:55.785228 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:54:55.786121 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:54:55.786247 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:54:55.787626 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:54:55.787741 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:54:55.788379 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:54:55.788487 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:54:55.789453 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:54:55.789593 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:54:55.796588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:54:55.799332 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:54:55.799842 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:54:55.799996 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:54:55.801702 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:54:55.801846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:54:55.815264 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:54:55.815388 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:54:55.821179 ignition[1012]: INFO : Ignition 2.20.0
Mar 17 17:54:55.821179 ignition[1012]: INFO : Stage: umount
Mar 17 17:54:55.821179 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:54:55.821179 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:54:55.825536 ignition[1012]: INFO : umount: umount passed
Mar 17 17:54:55.825536 ignition[1012]: INFO : Ignition finished successfully
Mar 17 17:54:55.827360 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:54:55.828208 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:54:55.829477 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:54:55.829528 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:54:55.830831 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:54:55.830895 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:54:55.832391 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:54:55.832445 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:54:55.834273 systemd[1]: Stopped target network.target - Network.
Mar 17 17:54:55.834792 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:54:55.834847 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:54:55.835410 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:54:55.835832 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:54:55.840941 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:54:55.845012 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:54:55.846157 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:54:55.847171 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:54:55.847226 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:54:55.848188 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:54:55.848234 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:54:55.850072 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:54:55.850159 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:54:55.851083 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:54:55.851165 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:54:55.852247 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:54:55.853401 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:54:55.856794 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:54:55.857949 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:54:55.858067 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:54:55.858331 systemd-networkd[779]: eth1: DHCPv6 lease lost
Mar 17 17:54:55.860692 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:54:55.860772 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:54:55.861208 systemd-networkd[779]: eth0: DHCPv6 lease lost
Mar 17 17:54:55.865516 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:54:55.865642 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:54:55.872690 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:54:55.873349 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:54:55.884245 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:54:55.884804 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:54:55.884878 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:54:55.885508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:54:55.885556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:54:55.886504 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:54:55.886553 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:54:55.887775 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:54:55.905198 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:54:55.905947 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:54:55.907816 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:54:55.908417 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:54:55.909790 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:54:55.909901 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:54:55.912017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:54:55.912074 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:54:55.913341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:54:55.913381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:54:55.914398 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:54:55.914448 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:54:55.915934 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:54:55.915980 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:54:55.916978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:54:55.917028 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:54:55.918203 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:54:55.918251 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:54:55.926254 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:54:55.927442 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:54:55.927516 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:54:55.928047 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:54:55.928093 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:54:55.928623 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:54:55.928668 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:54:55.931223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:54:55.931273 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:55.933384 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:54:55.933494 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:54:55.935002 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:54:55.942558 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:54:55.950219 systemd[1]: Switching root.
Mar 17 17:54:55.983602 systemd-journald[188]: Journal stopped
Mar 17 17:54:57.036234 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:54:57.036299 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:54:57.036313 kernel: SELinux: policy capability open_perms=1
Mar 17 17:54:57.036329 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:54:57.036350 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:54:57.036362 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:54:57.036373 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:54:57.036393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:54:57.036404 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:54:57.036415 kernel: audit: type=1403 audit(1742234096.139:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:54:57.036427 systemd[1]: Successfully loaded SELinux policy in 50.911ms.
Mar 17 17:54:57.036442 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.178ms.
Mar 17 17:54:57.036455 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:54:57.036467 systemd[1]: Detected virtualization kvm.
Mar 17 17:54:57.036484 systemd[1]: Detected architecture x86-64.
Mar 17 17:54:57.036496 systemd[1]: Detected first boot.
Mar 17 17:54:57.036510 systemd[1]: Hostname set to <ci-4152-2-2-5-05efd5484b>.
Mar 17 17:54:57.036522 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:54:57.036535 zram_generator::config[1059]: No configuration found.
Mar 17 17:54:57.036550 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:54:57.036562 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:54:57.036573 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:54:57.036590 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:54:57.036603 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:54:57.036622 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:54:57.036634 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:54:57.036646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:54:57.036658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:54:57.036670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:54:57.036682 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:54:57.036694 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:54:57.036706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:54:57.036718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:54:57.036732 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:54:57.036744 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:54:57.036756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:54:57.036769 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:54:57.036781 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:54:57.036795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:54:57.036808 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:54:57.036823 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:54:57.036835 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:54:57.036847 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:54:57.036858 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:54:57.036870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:54:57.036882 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:54:57.036895 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:54:57.036906 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:54:57.036920 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:54:57.036932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:54:57.036947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:54:57.036964 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:54:57.036975 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:54:57.036988 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:54:57.037002 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:54:57.037014 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:54:57.037026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:57.037038 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:54:57.037050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:54:57.037062 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:54:57.037082 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:54:57.037097 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:54:57.037111 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:54:57.037124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:54:57.053324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:54:57.053343 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:54:57.053356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:54:57.053368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:54:57.053381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:54:57.053394 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:54:57.053406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:54:57.053425 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:54:57.053437 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:54:57.053450 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:54:57.053467 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:54:57.053485 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:54:57.053500 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:54:57.053512 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:54:57.053525 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:54:57.053540 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:54:57.053552 kernel: fuse: init (API version 7.39)
Mar 17 17:54:57.053566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:54:57.053578 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:54:57.053590 systemd[1]: Stopped verity-setup.service.
Mar 17 17:54:57.053603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:57.053640 systemd-journald[1131]: Collecting audit messages is disabled.
Mar 17 17:54:57.053661 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:54:57.053678 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:54:57.053690 kernel: loop: module loaded
Mar 17 17:54:57.053702 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:54:57.053715 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:54:57.053728 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:54:57.053743 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:54:57.053755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:54:57.053767 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:54:57.053780 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:54:57.053792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:54:57.053807 systemd-journald[1131]: Journal started
Mar 17 17:54:57.053841 systemd-journald[1131]: Runtime Journal (/run/log/journal/3db38ba185c7420cb84d031aa6e92fa0) is 4.8M, max 38.4M, 33.6M free.
Mar 17 17:54:56.757311 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:54:56.775012 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:54:56.775708 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:54:57.067656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:54:57.067698 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:54:57.061086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:54:57.061261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:54:57.064252 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:54:57.064408 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:54:57.065371 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:54:57.065526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:54:57.066339 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:54:57.067148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:54:57.068553 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:54:57.077757 kernel: ACPI: bus type drm_connector registered
Mar 17 17:54:57.076929 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:54:57.077268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:54:57.097906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:54:57.101119 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:54:57.110219 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:54:57.116048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:54:57.116616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:54:57.116651 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:54:57.118089 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:54:57.140279 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:54:57.147278 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:54:57.148865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:54:57.153268 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:54:57.156360 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:54:57.157011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:54:57.165292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:54:57.165864 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:54:57.171180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:54:57.174346 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:54:57.177432 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:54:57.180956 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:54:57.182367 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:54:57.183473 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:54:57.202388 systemd-journald[1131]: Time spent on flushing to /var/log/journal/3db38ba185c7420cb84d031aa6e92fa0 is 61.811ms for 1140 entries.
Mar 17 17:54:57.202388 systemd-journald[1131]: System Journal (/var/log/journal/3db38ba185c7420cb84d031aa6e92fa0) is 8.0M, max 584.8M, 576.8M free.
Mar 17 17:54:57.302380 systemd-journald[1131]: Received client request to flush runtime journal.
Mar 17 17:54:57.302426 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 17:54:57.302452 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:54:57.220543 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:54:57.221373 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:54:57.230313 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:54:57.295555 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:54:57.300553 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 17 17:54:57.300567 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 17 17:54:57.311061 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:54:57.314427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:54:57.315568 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:54:57.321335 kernel: loop1: detected capacity change from 0 to 140992
Mar 17 17:54:57.331088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:54:57.340345 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:54:57.341465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:54:57.355403 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:54:57.369257 kernel: loop2: detected capacity change from 0 to 138184
Mar 17 17:54:57.373724 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:54:57.416699 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:54:57.425242 kernel: loop3: detected capacity change from 0 to 8
Mar 17 17:54:57.424403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:54:57.450637 kernel: loop4: detected capacity change from 0 to 210664
Mar 17 17:54:57.461244 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 17 17:54:57.461270 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 17 17:54:57.468490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:54:57.480191 kernel: loop5: detected capacity change from 0 to 140992
Mar 17 17:54:57.505169 kernel: loop6: detected capacity change from 0 to 138184
Mar 17 17:54:57.530168 kernel: loop7: detected capacity change from 0 to 8
Mar 17 17:54:57.533336 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:54:57.533976 (sd-merge)[1202]: Merged extensions into '/usr'.
Mar 17 17:54:57.541559 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:54:57.541727 systemd[1]: Reloading...
Mar 17 17:54:57.677226 zram_generator::config[1229]: No configuration found.
Mar 17 17:54:57.742171 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:54:57.815453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:54:57.872690 systemd[1]: Reloading finished in 330 ms.
Mar 17 17:54:57.922966 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:54:57.927448 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:54:57.933387 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:54:57.938798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:54:57.951238 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:54:57.951250 systemd[1]: Reloading...
Mar 17 17:54:57.976562 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:54:57.977297 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:54:57.978380 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:54:57.978744 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Mar 17 17:54:57.978882 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Mar 17 17:54:57.982596 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:54:57.982690 systemd-tmpfiles[1273]: Skipping /boot
Mar 17 17:54:58.002614 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:54:58.003778 systemd-tmpfiles[1273]: Skipping /boot
Mar 17 17:54:58.063199 zram_generator::config[1303]: No configuration found.
Mar 17 17:54:58.175963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:54:58.230716 systemd[1]: Reloading finished in 279 ms.
Mar 17 17:54:58.252196 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:54:58.259623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:54:58.268179 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:54:58.276507 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:54:58.282198 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:54:58.290815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:54:58.296411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:54:58.307412 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:54:58.312967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.313173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:54:58.320202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:54:58.323761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:54:58.332313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:54:58.333013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:54:58.338360 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:54:58.338853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.340290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:54:58.352447 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:54:58.353449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:54:58.353649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:54:58.359875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.362020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:54:58.371378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:54:58.372125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:54:58.372269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.374237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:54:58.384869 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:54:58.385738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:54:58.391119 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:54:58.395813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.396294 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Mar 17 17:54:58.396827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:54:58.405729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:54:58.409467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:54:58.411275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:54:58.411413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.413412 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:54:58.414675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:54:58.418455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:54:58.419948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:54:58.420860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:54:58.422907 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:54:58.423448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:54:58.425770 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:54:58.427254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:54:58.439791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:54:58.439861 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:54:58.439882 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:54:58.441486 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:54:58.453303 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:54:58.464400 augenrules[1390]: No rules
Mar 17 17:54:58.465907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:54:58.475363 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:54:58.475972 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:54:58.476775 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:54:58.476980 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:54:58.593400 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:54:58.649903 systemd-networkd[1403]: lo: Link UP
Mar 17 17:54:58.649914 systemd-networkd[1403]: lo: Gained carrier
Mar 17 17:54:58.650268 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:54:58.651092 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:54:58.656621 systemd-networkd[1403]: Enumeration completed
Mar 17 17:54:58.656721 systemd-timesyncd[1386]: No network connectivity, watching for changes.
Mar 17 17:54:58.656779 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:54:58.657263 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:58.657267 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:54:58.659817 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:58.659920 systemd-networkd[1403]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:54:58.661252 systemd-networkd[1403]: eth0: Link UP
Mar 17 17:54:58.661324 systemd-networkd[1403]: eth0: Gained carrier
Mar 17 17:54:58.661341 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:58.665399 systemd-networkd[1403]: eth1: Link UP
Mar 17 17:54:58.665406 systemd-networkd[1403]: eth1: Gained carrier
Mar 17 17:54:58.665419 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:58.669588 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:54:58.670661 systemd-resolved[1348]: Positive Trust Anchors:
Mar 17 17:54:58.670676 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:54:58.670708 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:54:58.678412 systemd-resolved[1348]: Using system hostname 'ci-4152-2-2-5-05efd5484b'.
Mar 17 17:54:58.680707 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:54:58.681386 systemd[1]: Reached target network.target - Network.
Mar 17 17:54:58.681857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:54:58.685250 systemd-networkd[1403]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:54:58.686281 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Mar 17 17:54:58.714702 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:54:58.724266 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 17 17:54:58.731210 systemd-networkd[1403]: eth0: DHCPv4 address 37.27.0.76/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:54:58.732148 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Mar 17 17:54:58.733055 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Mar 17 17:54:58.738203 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1414)
Mar 17 17:54:58.746190 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:54:58.765155 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:54:58.777815 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 17 17:54:58.778156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.778275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:54:58.785579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:54:58.796291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:54:58.805803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:54:58.808275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:54:58.808313 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:54:58.808327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:54:58.820422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:54:58.820607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:54:58.823597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:54:58.823803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:54:58.829642 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:54:58.832492 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:54:58.832726 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:54:58.831541 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:54:58.832987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:54:58.834654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:54:58.835260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:54:58.845302 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Mar 17 17:54:58.857257 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 17 17:54:58.861262 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 17 17:54:58.864194 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:54:58.878269 kernel: Console: switching to colour dummy device 80x25
Mar 17 17:54:58.885456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:54:58.889526 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:54:58.889574 kernel: [drm] features: -context_init
Mar 17 17:54:58.891776 kernel: [drm] number of scanouts: 1
Mar 17 17:54:58.891823 kernel: [drm] number of cap sets: 0
Mar 17 17:54:58.895149 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:54:58.895326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:54:58.901355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:54:58.905314 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 17 17:54:58.905349 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:54:58.912159 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:54:58.925414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:54:58.925775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:58.941451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:54:58.942874 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:54:58.952268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:54:58.952520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:58.958277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:54:59.033078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:54:59.034349 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:54:59.039286 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:54:59.052739 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:54:59.085637 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:54:59.088212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:54:59.088458 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:54:59.088751 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:54:59.088886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:54:59.089232 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:54:59.089510 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:54:59.089606 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:54:59.089701 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:54:59.089735 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:54:59.089821 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:54:59.091656 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:54:59.093460 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:54:59.099803 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:54:59.101334 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:54:59.103272 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:54:59.104497 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:54:59.105061 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:54:59.108498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:54:59.108640 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:54:59.116266 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:54:59.122300 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:54:59.123407 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:54:59.133251 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:54:59.138238 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:54:59.142481 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:54:59.144616 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:54:59.150755 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:54:59.157655 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:54:59.177264 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:54:59.196790 coreos-metadata[1471]: Mar 17 17:54:59.191 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:54:59.196790 coreos-metadata[1471]: Mar 17 17:54:59.194 INFO Fetch successful
Mar 17 17:54:59.196790 coreos-metadata[1471]: Mar 17 17:54:59.196 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:54:59.196790 coreos-metadata[1471]: Mar 17 17:54:59.196 INFO Fetch successful
Mar 17 17:54:59.180052 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:54:59.207995 jq[1473]: false
Mar 17 17:54:59.200318 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:54:59.209305 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:54:59.211378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:54:59.211856 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:54:59.220630 dbus-daemon[1472]: [system] SELinux support is enabled
Mar 17 17:54:59.220753 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:54:59.229340 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:54:59.230995 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:54:59.242569 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found loop4
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found loop5
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found loop6
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found loop7
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda1
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda2
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda3
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found usr
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda4
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda6
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda7
Mar 17 17:54:59.246243 extend-filesystems[1474]: Found sda9
Mar 17 17:54:59.246243 extend-filesystems[1474]: Checking size of /dev/sda9
Mar 17 17:54:59.360932 jq[1492]: true
Mar 17 17:54:59.361065 update_engine[1490]: I20250317 17:54:59.339546 1490 main.cc:92] Flatcar Update Engine starting
Mar 17 17:54:59.361065 update_engine[1490]: I20250317 17:54:59.358304 1490 update_check_scheduler.cc:74] Next update check in 5m46s
Mar 17 17:54:59.375269 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 17 17:54:59.375298 extend-filesystems[1474]: Resized partition /dev/sda9
Mar 17 17:54:59.257619 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:54:59.424640 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:54:59.257830 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:54:59.258982 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:54:59.427918 tar[1498]: linux-amd64/helm
Mar 17 17:54:59.259257 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:54:59.434716 jq[1499]: true
Mar 17 17:54:59.273704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:54:59.273934 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:54:59.295585 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:54:59.295624 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:54:59.440185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1405)
Mar 17 17:54:59.305423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:54:59.305445 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:54:59.315359 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:54:59.364365 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:54:59.372615 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:54:59.390092 systemd-logind[1487]: New seat seat0.
Mar 17 17:54:59.462125 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:54:59.468322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:54:59.473121 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 17 17:54:59.473177 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:54:59.473391 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:54:59.548206 bash[1542]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:54:59.550725 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:54:59.555179 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:54:59.563648 systemd[1]: Starting sshkeys.service...
Mar 17 17:54:59.586029 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 17 17:54:59.609852 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:54:59.618710 extend-filesystems[1518]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:54:59.618710 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:54:59.618710 extend-filesystems[1518]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:54:59.633496 extend-filesystems[1474]: Resized filesystem in /dev/sda9
Mar 17 17:54:59.633496 extend-filesystems[1474]: Found sr0
Mar 17 17:54:59.621468 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:54:59.624890 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:54:59.625171 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:54:59.651373 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:54:59.662412 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:54:59.671593 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:54:59.673435 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:54:59.674212 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:54:59.692489 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:54:59.715952 coreos-metadata[1564]: Mar 17 17:54:59.714 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:54:59.717187 coreos-metadata[1564]: Mar 17 17:54:59.717 INFO Fetch successful
Mar 17 17:54:59.719883 unknown[1564]: wrote ssh authorized keys file for user: core
Mar 17 17:54:59.721580 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:54:59.735856 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:54:59.749348 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:54:59.751099 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:54:59.773256 containerd[1500]: time="2025-03-17T17:54:59.772569330Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:54:59.777536 update-ssh-keys[1577]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:54:59.780423 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:54:59.788314 systemd[1]: Finished sshkeys.service.
Mar 17 17:54:59.812764 containerd[1500]: time="2025-03-17T17:54:59.812589596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.814573 containerd[1500]: time="2025-03-17T17:54:59.814464632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:54:59.814573 containerd[1500]: time="2025-03-17T17:54:59.814492013Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:54:59.814573 containerd[1500]: time="2025-03-17T17:54:59.814508905Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:54:59.814907 containerd[1500]: time="2025-03-17T17:54:59.814689684Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:54:59.814907 containerd[1500]: time="2025-03-17T17:54:59.814715823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.814907 containerd[1500]: time="2025-03-17T17:54:59.814804619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:54:59.814907 containerd[1500]: time="2025-03-17T17:54:59.814817754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815057 containerd[1500]: time="2025-03-17T17:54:59.815019622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815057 containerd[1500]: time="2025-03-17T17:54:59.815033168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815057 containerd[1500]: time="2025-03-17T17:54:59.815045872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815057 containerd[1500]: time="2025-03-17T17:54:59.815054859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815383 containerd[1500]: time="2025-03-17T17:54:59.815176757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815436 containerd[1500]: time="2025-03-17T17:54:59.815410907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815579 containerd[1500]: time="2025-03-17T17:54:59.815553714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:54:59.815579 containerd[1500]: time="2025-03-17T17:54:59.815576016Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:54:59.815694 containerd[1500]: time="2025-03-17T17:54:59.815672577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:54:59.816120 containerd[1500]: time="2025-03-17T17:54:59.815735425Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:54:59.820024 containerd[1500]: time="2025-03-17T17:54:59.819611884Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:54:59.820024 containerd[1500]: time="2025-03-17T17:54:59.819654184Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:54:59.820024 containerd[1500]: time="2025-03-17T17:54:59.819669392Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:54:59.820024 containerd[1500]: time="2025-03-17T17:54:59.819686144Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:54:59.820024 containerd[1500]: time="2025-03-17T17:54:59.819704578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:54:59.820024 containerd[1500]: time="2025-03-17T17:54:59.819836174Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:54:59.820178 containerd[1500]: time="2025-03-17T17:54:59.820054764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:54:59.820203 containerd[1500]: time="2025-03-17T17:54:59.820178166Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:54:59.820203 containerd[1500]: time="2025-03-17T17:54:59.820193755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:54:59.820238 containerd[1500]: time="2025-03-17T17:54:59.820206769Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:54:59.820238 containerd[1500]: time="2025-03-17T17:54:59.820220185Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820238 containerd[1500]: time="2025-03-17T17:54:59.820233419Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820299 containerd[1500]: time="2025-03-17T17:54:59.820246684Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820299 containerd[1500]: time="2025-03-17T17:54:59.820259889Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820299 containerd[1500]: time="2025-03-17T17:54:59.820273074Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820299 containerd[1500]: time="2025-03-17T17:54:59.820285357Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820299 containerd[1500]: time="2025-03-17T17:54:59.820296829Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820390 containerd[1500]: time="2025-03-17T17:54:59.820308340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:54:59.820390 containerd[1500]: time="2025-03-17T17:54:59.820329630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820390 containerd[1500]: time="2025-03-17T17:54:59.820342644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820390 containerd[1500]: time="2025-03-17T17:54:59.820354457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820390 containerd[1500]: time="2025-03-17T17:54:59.820367170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820390 containerd[1500]: time="2025-03-17T17:54:59.820382920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820395534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820408508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820420741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820432764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820446659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820457590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820468821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820480052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820501 containerd[1500]: time="2025-03-17T17:54:59.820494068Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820513625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820526930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820538742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820583115Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820596951Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820607010Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820618121Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820627879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820639461Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820649239Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:54:59.820659 containerd[1500]: time="2025-03-17T17:54:59.820659078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:54:59.821236 containerd[1500]: time="2025-03-17T17:54:59.820908456Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:54:59.821236 containerd[1500]: time="2025-03-17T17:54:59.820955774Z" level=info msg="Connect containerd service"
Mar 17 17:54:59.821236 containerd[1500]: time="2025-03-17T17:54:59.820989116Z" level=info msg="using legacy CRI server"
Mar 17 17:54:59.821236 containerd[1500]: time="2025-03-17T17:54:59.820995468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:54:59.821236 containerd[1500]: time="2025-03-17T17:54:59.821120463Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.821991206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822147158Z" level=info msg="Start subscribing containerd event"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822188376Z" level=info msg="Start recovering state"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822246314Z" level=info msg="Start event monitor"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822273095Z" level=info msg="Start snapshots syncer"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822284356Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822294475Z" level=info msg="Start streaming server"
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822770557Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:54:59.822909 containerd[1500]: time="2025-03-17T17:54:59.822841089Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:54:59.824178 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:54:59.828601 containerd[1500]: time="2025-03-17T17:54:59.828556818Z" level=info msg="containerd successfully booted in 0.058367s"
Mar 17 17:54:59.917327 systemd-networkd[1403]: eth1: Gained IPv6LL
Mar 17 17:54:59.918579 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Mar 17 17:54:59.921399 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:54:59.925679 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:54:59.937319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:54:59.946218 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:54:59.993930 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:55:00.055175 tar[1498]: linux-amd64/LICENSE
Mar 17 17:55:00.055175 tar[1498]: linux-amd64/README.md
Mar 17 17:55:00.068981 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:55:00.622274 systemd-networkd[1403]: eth0: Gained IPv6LL
Mar 17 17:55:00.622874 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Mar 17 17:55:00.778857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:55:00.780456 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:55:00.784477 systemd[1]: Startup finished in 1.290s (kernel) + 7.420s (initrd) + 4.691s (userspace) = 13.401s.
Mar 17 17:55:00.789150 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:55:01.359808 kubelet[1602]: E0317 17:55:01.359732 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:55:01.364380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:55:01.364648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:55:11.614940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:55:11.620425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:55:11.770897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:55:11.775337 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:55:11.820742 kubelet[1622]: E0317 17:55:11.820684 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:55:11.827690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:55:11.827938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:55:22.021065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:55:22.026341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:55:22.181255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:55:22.185683 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:55:22.228092 kubelet[1638]: E0317 17:55:22.228020 1638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:55:22.232181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:55:22.232424 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:55:31.372491 systemd-timesyncd[1386]: Contacted time server 80.153.195.191:123 (2.flatcar.pool.ntp.org).
Mar 17 17:55:31.372580 systemd-timesyncd[1386]: Initial clock synchronization to Mon 2025-03-17 17:55:31.372226 UTC.
Mar 17 17:55:31.372736 systemd-resolved[1348]: Clock change detected. Flushing caches.
Mar 17 17:55:32.821315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:55:32.827603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:55:32.984345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:55:32.989560 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:55:33.029998 kubelet[1654]: E0317 17:55:33.029927 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:55:33.034604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:55:33.034814 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:55:37.539030 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:55:37.543639 systemd[1]: Started sshd@0-37.27.0.76:22-220.81.148.101:42524.service - OpenSSH per-connection server daemon (220.81.148.101:42524).
Mar 17 17:55:43.059302 sshd[1664]: maximum authentication attempts exceeded for root from 220.81.148.101 port 42524 ssh2 [preauth]
Mar 17 17:55:43.059302 sshd[1664]: Disconnecting authenticating user root 220.81.148.101 port 42524: Too many authentication failures [preauth]
Mar 17 17:55:43.061682 systemd[1]: sshd@0-37.27.0.76:22-220.81.148.101:42524.service: Deactivated successfully.
Mar 17 17:55:43.065027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:55:43.072772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:55:43.242392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:55:43.247442 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:55:43.301513 kubelet[1676]: E0317 17:55:43.301433 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:55:43.306244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:55:43.306486 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:55:43.670642 systemd[1]: Started sshd@1-37.27.0.76:22-220.81.148.101:43382.service - OpenSSH per-connection server daemon (220.81.148.101:43382).
Mar 17 17:55:44.943881 update_engine[1490]: I20250317 17:55:44.943462 1490 update_attempter.cc:509] Updating boot flags...
Mar 17 17:55:44.985455 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1697)
Mar 17 17:55:45.040432 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1699)
Mar 17 17:55:45.099472 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1699)
Mar 17 17:55:48.134072 sshd[1686]: maximum authentication attempts exceeded for root from 220.81.148.101 port 43382 ssh2 [preauth]
Mar 17 17:55:48.134072 sshd[1686]: Disconnecting authenticating user root 220.81.148.101 port 43382: Too many authentication failures [preauth]
Mar 17 17:55:48.137024 systemd[1]: sshd@1-37.27.0.76:22-220.81.148.101:43382.service: Deactivated successfully.
Mar 17 17:55:48.740264 systemd[1]: Started sshd@2-37.27.0.76:22-220.81.148.101:44194.service - OpenSSH per-connection server daemon (220.81.148.101:44194).
Mar 17 17:55:53.321523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 17:55:53.326674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:53.491880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:53.507718 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:55:53.545666 kubelet[1722]: E0317 17:55:53.545607 1722 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:55:53.549745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:55:53.549947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:55:54.227222 sshd[1712]: maximum authentication attempts exceeded for root from 220.81.148.101 port 44194 ssh2 [preauth] Mar 17 17:55:54.227222 sshd[1712]: Disconnecting authenticating user root 220.81.148.101 port 44194: Too many authentication failures [preauth] Mar 17 17:55:54.229032 systemd[1]: sshd@2-37.27.0.76:22-220.81.148.101:44194.service: Deactivated successfully. Mar 17 17:55:54.832582 systemd[1]: Started sshd@3-37.27.0.76:22-220.81.148.101:45104.service - OpenSSH per-connection server daemon (220.81.148.101:45104). Mar 17 17:55:59.038594 sshd[1733]: Received disconnect from 220.81.148.101 port 45104:11: disconnected by user [preauth] Mar 17 17:55:59.038594 sshd[1733]: Disconnected from authenticating user root 220.81.148.101 port 45104 [preauth] Mar 17 17:55:59.041522 systemd[1]: sshd@3-37.27.0.76:22-220.81.148.101:45104.service: Deactivated successfully. 
Mar 17 17:55:59.355689 systemd[1]: Started sshd@4-37.27.0.76:22-220.81.148.101:45678.service - OpenSSH per-connection server daemon (220.81.148.101:45678). Mar 17 17:56:02.636374 sshd[1738]: Invalid user admin from 220.81.148.101 port 45678 Mar 17 17:56:03.571230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 17:56:03.576584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:03.724535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:03.739820 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:03.780841 kubelet[1748]: E0317 17:56:03.780795 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:03.785343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:03.785576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:04.170746 sshd[1738]: maximum authentication attempts exceeded for invalid user admin from 220.81.148.101 port 45678 ssh2 [preauth] Mar 17 17:56:04.170746 sshd[1738]: Disconnecting invalid user admin 220.81.148.101 port 45678: Too many authentication failures [preauth] Mar 17 17:56:04.173809 systemd[1]: sshd@4-37.27.0.76:22-220.81.148.101:45678.service: Deactivated successfully. Mar 17 17:56:04.784624 systemd[1]: Started sshd@5-37.27.0.76:22-220.81.148.101:46500.service - OpenSSH per-connection server daemon (220.81.148.101:46500). 
Mar 17 17:56:06.974547 sshd[1759]: Invalid user admin from 220.81.148.101 port 46500 Mar 17 17:56:08.529530 sshd[1759]: maximum authentication attempts exceeded for invalid user admin from 220.81.148.101 port 46500 ssh2 [preauth] Mar 17 17:56:08.529530 sshd[1759]: Disconnecting invalid user admin 220.81.148.101 port 46500: Too many authentication failures [preauth] Mar 17 17:56:08.532720 systemd[1]: sshd@5-37.27.0.76:22-220.81.148.101:46500.service: Deactivated successfully. Mar 17 17:56:09.168217 systemd[1]: Started sshd@6-37.27.0.76:22-220.81.148.101:47120.service - OpenSSH per-connection server daemon (220.81.148.101:47120). Mar 17 17:56:11.711508 sshd[1764]: Invalid user admin from 220.81.148.101 port 47120 Mar 17 17:56:12.999113 sshd[1764]: Received disconnect from 220.81.148.101 port 47120:11: disconnected by user [preauth] Mar 17 17:56:12.999113 sshd[1764]: Disconnected from invalid user admin 220.81.148.101 port 47120 [preauth] Mar 17 17:56:13.001809 systemd[1]: sshd@6-37.27.0.76:22-220.81.148.101:47120.service: Deactivated successfully. Mar 17 17:56:13.298703 systemd[1]: Started sshd@7-37.27.0.76:22-220.81.148.101:47782.service - OpenSSH per-connection server daemon (220.81.148.101:47782). Mar 17 17:56:13.821251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 17:56:13.826589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:13.973474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:56:13.979299 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:14.018911 kubelet[1779]: E0317 17:56:14.018843 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:14.023561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:14.023838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:16.840963 sshd[1769]: Invalid user oracle from 220.81.148.101 port 47782 Mar 17 17:56:18.368490 sshd[1769]: maximum authentication attempts exceeded for invalid user oracle from 220.81.148.101 port 47782 ssh2 [preauth] Mar 17 17:56:18.368490 sshd[1769]: Disconnecting invalid user oracle 220.81.148.101 port 47782: Too many authentication failures [preauth] Mar 17 17:56:18.371345 systemd[1]: sshd@7-37.27.0.76:22-220.81.148.101:47782.service: Deactivated successfully. Mar 17 17:56:19.004136 systemd[1]: Started sshd@8-37.27.0.76:22-220.81.148.101:48602.service - OpenSSH per-connection server daemon (220.81.148.101:48602). Mar 17 17:56:21.927025 sshd[1790]: Invalid user oracle from 220.81.148.101 port 48602 Mar 17 17:56:23.520620 sshd[1790]: maximum authentication attempts exceeded for invalid user oracle from 220.81.148.101 port 48602 ssh2 [preauth] Mar 17 17:56:23.520620 sshd[1790]: Disconnecting invalid user oracle 220.81.148.101 port 48602: Too many authentication failures [preauth] Mar 17 17:56:23.524030 systemd[1]: sshd@8-37.27.0.76:22-220.81.148.101:48602.service: Deactivated successfully. Mar 17 17:56:24.071402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Mar 17 17:56:24.077611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:24.152492 systemd[1]: Started sshd@9-37.27.0.76:22-220.81.148.101:49388.service - OpenSSH per-connection server daemon (220.81.148.101:49388). Mar 17 17:56:24.240300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:24.250690 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:24.296222 kubelet[1805]: E0317 17:56:24.296164 1805 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:24.300753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:24.300968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:27.618218 sshd[1798]: Invalid user oracle from 220.81.148.101 port 49388 Mar 17 17:56:28.222630 sshd[1798]: Received disconnect from 220.81.148.101 port 49388:11: disconnected by user [preauth] Mar 17 17:56:28.222630 sshd[1798]: Disconnected from invalid user oracle 220.81.148.101 port 49388 [preauth] Mar 17 17:56:28.225339 systemd[1]: sshd@9-37.27.0.76:22-220.81.148.101:49388.service: Deactivated successfully. Mar 17 17:56:28.563742 systemd[1]: Started sshd@10-37.27.0.76:22-220.81.148.101:49952.service - OpenSSH per-connection server daemon (220.81.148.101:49952). 
Mar 17 17:56:31.440985 sshd[1816]: Invalid user usuario from 220.81.148.101 port 49952 Mar 17 17:56:33.036127 sshd[1816]: maximum authentication attempts exceeded for invalid user usuario from 220.81.148.101 port 49952 ssh2 [preauth] Mar 17 17:56:33.036127 sshd[1816]: Disconnecting invalid user usuario 220.81.148.101 port 49952: Too many authentication failures [preauth] Mar 17 17:56:33.038774 systemd[1]: sshd@10-37.27.0.76:22-220.81.148.101:49952.service: Deactivated successfully. Mar 17 17:56:33.647770 systemd[1]: Started sshd@11-37.27.0.76:22-220.81.148.101:50714.service - OpenSSH per-connection server daemon (220.81.148.101:50714). Mar 17 17:56:34.321304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 17 17:56:34.327881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:34.478088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:34.489852 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:34.528891 kubelet[1831]: E0317 17:56:34.528830 1831 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:34.533001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:34.533205 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:56:37.018787 sshd[1821]: Invalid user usuario from 220.81.148.101 port 50714 Mar 17 17:56:38.544552 sshd[1821]: maximum authentication attempts exceeded for invalid user usuario from 220.81.148.101 port 50714 ssh2 [preauth] Mar 17 17:56:38.544552 sshd[1821]: Disconnecting invalid user usuario 220.81.148.101 port 50714: Too many authentication failures [preauth] Mar 17 17:56:38.547585 systemd[1]: sshd@11-37.27.0.76:22-220.81.148.101:50714.service: Deactivated successfully. Mar 17 17:56:39.153436 systemd[1]: Started sshd@12-37.27.0.76:22-220.81.148.101:51580.service - OpenSSH per-connection server daemon (220.81.148.101:51580). Mar 17 17:56:41.880708 sshd[1843]: Invalid user usuario from 220.81.148.101 port 51580 Mar 17 17:56:42.487825 sshd[1843]: Received disconnect from 220.81.148.101 port 51580:11: disconnected by user [preauth] Mar 17 17:56:42.487825 sshd[1843]: Disconnected from invalid user usuario 220.81.148.101 port 51580 [preauth] Mar 17 17:56:42.490887 systemd[1]: sshd@12-37.27.0.76:22-220.81.148.101:51580.service: Deactivated successfully. Mar 17 17:56:42.801237 systemd[1]: Started sshd@13-37.27.0.76:22-220.81.148.101:52102.service - OpenSSH per-connection server daemon (220.81.148.101:52102). Mar 17 17:56:44.571203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 17 17:56:44.576819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:44.726117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:56:44.736758 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:44.775639 kubelet[1858]: E0317 17:56:44.775593 1858 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:44.780024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:44.780271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:45.841800 sshd[1848]: Invalid user test from 220.81.148.101 port 52102 Mar 17 17:56:47.380949 sshd[1848]: maximum authentication attempts exceeded for invalid user test from 220.81.148.101 port 52102 ssh2 [preauth] Mar 17 17:56:47.380949 sshd[1848]: Disconnecting invalid user test 220.81.148.101 port 52102: Too many authentication failures [preauth] Mar 17 17:56:47.383796 systemd[1]: sshd@13-37.27.0.76:22-220.81.148.101:52102.service: Deactivated successfully. Mar 17 17:56:48.006645 systemd[1]: Started sshd@14-37.27.0.76:22-220.81.148.101:52854.service - OpenSSH per-connection server daemon (220.81.148.101:52854). Mar 17 17:56:50.742263 sshd[1869]: Invalid user test from 220.81.148.101 port 52854 Mar 17 17:56:52.294799 sshd[1869]: maximum authentication attempts exceeded for invalid user test from 220.81.148.101 port 52854 ssh2 [preauth] Mar 17 17:56:52.294799 sshd[1869]: Disconnecting invalid user test 220.81.148.101 port 52854: Too many authentication failures [preauth] Mar 17 17:56:52.297848 systemd[1]: sshd@14-37.27.0.76:22-220.81.148.101:52854.service: Deactivated successfully. 
Mar 17 17:56:52.905570 systemd[1]: Started sshd@15-37.27.0.76:22-220.81.148.101:53530.service - OpenSSH per-connection server daemon (220.81.148.101:53530). Mar 17 17:56:54.821506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 17 17:56:54.826657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:54.981583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:54.983259 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:55.023475 kubelet[1884]: E0317 17:56:55.023401 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:55.027757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:55.027959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:56.009302 sshd[1874]: Invalid user test from 220.81.148.101 port 53530 Mar 17 17:56:56.618445 sshd[1874]: Received disconnect from 220.81.148.101 port 53530:11: disconnected by user [preauth] Mar 17 17:56:56.618445 sshd[1874]: Disconnected from invalid user test 220.81.148.101 port 53530 [preauth] Mar 17 17:56:56.621037 systemd[1]: sshd@15-37.27.0.76:22-220.81.148.101:53530.service: Deactivated successfully. Mar 17 17:56:56.929203 systemd[1]: Started sshd@16-37.27.0.76:22-220.81.148.101:54116.service - OpenSSH per-connection server daemon (220.81.148.101:54116). Mar 17 17:56:59.708341 systemd[1]: Started sshd@17-37.27.0.76:22-139.178.68.195:46106.service - OpenSSH per-connection server daemon (139.178.68.195:46106). 
Mar 17 17:56:59.853639 sshd[1895]: Invalid user user from 220.81.148.101 port 54116 Mar 17 17:57:00.681382 sshd[1898]: Accepted publickey for core from 139.178.68.195 port 46106 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:00.683979 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:00.692058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:57:00.707675 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:57:00.710978 systemd-logind[1487]: New session 1 of user core. Mar 17 17:57:00.720622 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:57:00.726982 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:57:00.736349 (systemd)[1902]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:57:00.862872 systemd[1902]: Queued start job for default target default.target. Mar 17 17:57:00.873731 systemd[1902]: Created slice app.slice - User Application Slice. Mar 17 17:57:00.873758 systemd[1902]: Reached target paths.target - Paths. Mar 17 17:57:00.873777 systemd[1902]: Reached target timers.target - Timers. Mar 17 17:57:00.875342 systemd[1902]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:57:00.888482 systemd[1902]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:57:00.888759 systemd[1902]: Reached target sockets.target - Sockets. Mar 17 17:57:00.888782 systemd[1902]: Reached target basic.target - Basic System. Mar 17 17:57:00.888828 systemd[1902]: Reached target default.target - Main User Target. Mar 17 17:57:00.888867 systemd[1902]: Startup finished in 145ms. Mar 17 17:57:00.888963 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:57:00.893575 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 17 17:57:01.377895 sshd[1895]: maximum authentication attempts exceeded for invalid user user from 220.81.148.101 port 54116 ssh2 [preauth] Mar 17 17:57:01.377895 sshd[1895]: Disconnecting invalid user user 220.81.148.101 port 54116: Too many authentication failures [preauth] Mar 17 17:57:01.381194 systemd[1]: sshd@16-37.27.0.76:22-220.81.148.101:54116.service: Deactivated successfully. Mar 17 17:57:01.579659 systemd[1]: Started sshd@18-37.27.0.76:22-139.178.68.195:46112.service - OpenSSH per-connection server daemon (139.178.68.195:46112). Mar 17 17:57:01.988315 systemd[1]: Started sshd@19-37.27.0.76:22-220.81.148.101:54906.service - OpenSSH per-connection server daemon (220.81.148.101:54906). Mar 17 17:57:02.548630 sshd[1915]: Accepted publickey for core from 139.178.68.195 port 46112 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:02.550233 sshd-session[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:02.554662 systemd-logind[1487]: New session 2 of user core. Mar 17 17:57:02.557533 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:57:03.223357 sshd[1920]: Connection closed by 139.178.68.195 port 46112 Mar 17 17:57:03.223970 sshd-session[1915]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:03.227603 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:57:03.228014 systemd[1]: sshd@18-37.27.0.76:22-139.178.68.195:46112.service: Deactivated successfully. Mar 17 17:57:03.229867 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:57:03.230774 systemd-logind[1487]: Removed session 2. Mar 17 17:57:03.397278 systemd[1]: Started sshd@20-37.27.0.76:22-139.178.68.195:46122.service - OpenSSH per-connection server daemon (139.178.68.195:46122). 
Mar 17 17:57:04.389887 sshd[1925]: Accepted publickey for core from 139.178.68.195 port 46122 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:04.391484 sshd-session[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:04.395647 systemd-logind[1487]: New session 3 of user core. Mar 17 17:57:04.405532 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:57:05.071262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 17 17:57:05.074429 sshd[1927]: Connection closed by 139.178.68.195 port 46122 Mar 17 17:57:05.077772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:05.074906 sshd-session[1925]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:05.078456 systemd[1]: sshd@20-37.27.0.76:22-139.178.68.195:46122.service: Deactivated successfully. Mar 17 17:57:05.081306 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:57:05.082263 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:57:05.086109 systemd-logind[1487]: Removed session 3. Mar 17 17:57:05.088566 sshd[1918]: Invalid user user from 220.81.148.101 port 54906 Mar 17 17:57:05.233675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:57:05.239384 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:57:05.240490 systemd[1]: Started sshd@21-37.27.0.76:22-139.178.68.195:46130.service - OpenSSH per-connection server daemon (139.178.68.195:46130). 
Mar 17 17:57:05.279525 kubelet[1939]: E0317 17:57:05.279479 1939 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:57:05.284024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:57:05.284238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:57:06.214781 sshd[1941]: Accepted publickey for core from 139.178.68.195 port 46130 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:06.216297 sshd-session[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:06.220877 systemd-logind[1487]: New session 4 of user core. Mar 17 17:57:06.227537 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:57:06.619609 sshd[1918]: maximum authentication attempts exceeded for invalid user user from 220.81.148.101 port 54906 ssh2 [preauth] Mar 17 17:57:06.619609 sshd[1918]: Disconnecting invalid user user 220.81.148.101 port 54906: Too many authentication failures [preauth] Mar 17 17:57:06.622673 systemd[1]: sshd@19-37.27.0.76:22-220.81.148.101:54906.service: Deactivated successfully. Mar 17 17:57:06.888662 sshd[1950]: Connection closed by 139.178.68.195 port 46130 Mar 17 17:57:06.889494 sshd-session[1941]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:06.892300 systemd[1]: sshd@21-37.27.0.76:22-139.178.68.195:46130.service: Deactivated successfully. Mar 17 17:57:06.894284 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:57:06.895911 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:57:06.897035 systemd-logind[1487]: Removed session 4. 
Mar 17 17:57:07.055350 systemd[1]: Started sshd@22-37.27.0.76:22-139.178.68.195:50448.service - OpenSSH per-connection server daemon (139.178.68.195:50448). Mar 17 17:57:07.261689 systemd[1]: Started sshd@23-37.27.0.76:22-220.81.148.101:55674.service - OpenSSH per-connection server daemon (220.81.148.101:55674). Mar 17 17:57:08.027809 sshd[1957]: Accepted publickey for core from 139.178.68.195 port 50448 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:08.029616 sshd-session[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:08.033972 systemd-logind[1487]: New session 5 of user core. Mar 17 17:57:08.041564 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:57:08.556462 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:57:08.556830 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:08.576058 sudo[1963]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:08.733573 sshd[1962]: Connection closed by 139.178.68.195 port 50448 Mar 17 17:57:08.734518 sshd-session[1957]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:08.737792 systemd[1]: sshd@22-37.27.0.76:22-139.178.68.195:50448.service: Deactivated successfully. Mar 17 17:57:08.740287 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:57:08.742138 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:57:08.743631 systemd-logind[1487]: Removed session 5. Mar 17 17:57:08.908649 systemd[1]: Started sshd@24-37.27.0.76:22-139.178.68.195:50454.service - OpenSSH per-connection server daemon (139.178.68.195:50454). 
Mar 17 17:57:09.877287 sshd[1968]: Accepted publickey for core from 139.178.68.195 port 50454 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:09.878916 sshd-session[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:09.883472 systemd-logind[1487]: New session 6 of user core. Mar 17 17:57:09.892556 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:57:10.395054 sudo[1972]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:57:10.395400 sudo[1972]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:10.398900 sudo[1972]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:10.404957 sudo[1971]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:57:10.405300 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:10.424694 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:57:10.451835 augenrules[1994]: No rules Mar 17 17:57:10.452630 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:57:10.452847 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:57:10.454355 sudo[1971]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:10.611730 sshd[1970]: Connection closed by 139.178.68.195 port 50454 Mar 17 17:57:10.612341 sshd-session[1968]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:10.615060 systemd[1]: sshd@24-37.27.0.76:22-139.178.68.195:50454.service: Deactivated successfully. Mar 17 17:57:10.616996 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:57:10.618443 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:57:10.619399 systemd-logind[1487]: Removed session 6. 
Mar 17 17:57:10.780373 systemd[1]: Started sshd@25-37.27.0.76:22-139.178.68.195:50466.service - OpenSSH per-connection server daemon (139.178.68.195:50466). Mar 17 17:57:11.454094 sshd[1960]: Invalid user user from 220.81.148.101 port 55674 Mar 17 17:57:11.761533 sshd[2002]: Accepted publickey for core from 139.178.68.195 port 50466 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 17:57:11.762497 sshd-session[2002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:11.767198 systemd-logind[1487]: New session 7 of user core. Mar 17 17:57:11.776547 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:57:12.281886 sudo[2005]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:57:12.282250 sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:57:12.584840 (dockerd)[2024]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:57:12.584968 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:57:12.731005 sshd[1960]: Received disconnect from 220.81.148.101 port 55674:11: disconnected by user [preauth] Mar 17 17:57:12.733817 sshd[1960]: Disconnected from invalid user user 220.81.148.101 port 55674 [preauth] Mar 17 17:57:12.732880 systemd[1]: sshd@23-37.27.0.76:22-220.81.148.101:55674.service: Deactivated successfully. Mar 17 17:57:12.863463 dockerd[2024]: time="2025-03-17T17:57:12.863220908Z" level=info msg="Starting up" Mar 17 17:57:12.929982 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2705293182-merged.mount: Deactivated successfully. Mar 17 17:57:12.970529 dockerd[2024]: time="2025-03-17T17:57:12.970481237Z" level=info msg="Loading containers: start." 
Mar 17 17:57:13.024763 systemd[1]: Started sshd@26-37.27.0.76:22-220.81.148.101:56558.service - OpenSSH per-connection server daemon (220.81.148.101:56558). Mar 17 17:57:13.140456 kernel: Initializing XFRM netlink socket Mar 17 17:57:13.234323 systemd-networkd[1403]: docker0: Link UP Mar 17 17:57:13.264727 dockerd[2024]: time="2025-03-17T17:57:13.264692241Z" level=info msg="Loading containers: done." Mar 17 17:57:13.284900 dockerd[2024]: time="2025-03-17T17:57:13.284824930Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:57:13.285081 dockerd[2024]: time="2025-03-17T17:57:13.284969149Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:57:13.285189 dockerd[2024]: time="2025-03-17T17:57:13.285153365Z" level=info msg="Daemon has completed initialization" Mar 17 17:57:13.319641 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:57:13.320317 dockerd[2024]: time="2025-03-17T17:57:13.319962084Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:57:14.417124 containerd[1500]: time="2025-03-17T17:57:14.417079637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:57:15.034639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062721869.mount: Deactivated successfully. Mar 17 17:57:15.321289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Mar 17 17:57:15.326747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:57:15.485707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:57:15.495857 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:57:15.541467 kubelet[2279]: E0317 17:57:15.541252 2279 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:57:15.545338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:57:15.545596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:57:15.701636 sshd[2059]: Invalid user ftpuser from 220.81.148.101 port 56558
Mar 17 17:57:16.224302 containerd[1500]: time="2025-03-17T17:57:16.224242959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:16.225329 containerd[1500]: time="2025-03-17T17:57:16.225282126Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674667"
Mar 17 17:57:16.226454 containerd[1500]: time="2025-03-17T17:57:16.226392880Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:16.229109 containerd[1500]: time="2025-03-17T17:57:16.229039538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:16.230935 containerd[1500]: time="2025-03-17T17:57:16.230027817Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 1.812907201s"
Mar 17 17:57:16.230935 containerd[1500]: time="2025-03-17T17:57:16.230310052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 17:57:16.257666 containerd[1500]: time="2025-03-17T17:57:16.257499788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:57:17.225350 sshd[2059]: maximum authentication attempts exceeded for invalid user ftpuser from 220.81.148.101 port 56558 ssh2 [preauth]
Mar 17 17:57:17.225350 sshd[2059]: Disconnecting invalid user ftpuser 220.81.148.101 port 56558: Too many authentication failures [preauth]
Mar 17 17:57:17.227988 systemd[1]: sshd@26-37.27.0.76:22-220.81.148.101:56558.service: Deactivated successfully.
Mar 17 17:57:17.687348 containerd[1500]: time="2025-03-17T17:57:17.687285300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:17.688357 containerd[1500]: time="2025-03-17T17:57:17.688247415Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619794"
Mar 17 17:57:17.689080 containerd[1500]: time="2025-03-17T17:57:17.689039372Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:17.691461 containerd[1500]: time="2025-03-17T17:57:17.691399886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:17.692436 containerd[1500]: time="2025-03-17T17:57:17.692365939Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.434833459s"
Mar 17 17:57:17.692436 containerd[1500]: time="2025-03-17T17:57:17.692390897Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 17:57:17.718304 containerd[1500]: time="2025-03-17T17:57:17.718258793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:57:17.839388 systemd[1]: Started sshd@27-37.27.0.76:22-220.81.148.101:57182.service - OpenSSH per-connection server daemon (220.81.148.101:57182).
Mar 17 17:57:18.701350 containerd[1500]: time="2025-03-17T17:57:18.701290285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:18.702520 containerd[1500]: time="2025-03-17T17:57:18.702471362Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903331"
Mar 17 17:57:18.703016 containerd[1500]: time="2025-03-17T17:57:18.702974641Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:18.706510 containerd[1500]: time="2025-03-17T17:57:18.706465550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:18.707451 containerd[1500]: time="2025-03-17T17:57:18.707376156Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 989.074459ms"
Mar 17 17:57:18.707451 containerd[1500]: time="2025-03-17T17:57:18.707404370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 17:57:18.731389 containerd[1500]: time="2025-03-17T17:57:18.731333131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:57:19.825109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757604512.mount: Deactivated successfully.
Mar 17 17:57:20.201742 containerd[1500]: time="2025-03-17T17:57:20.201593850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:20.204435 containerd[1500]: time="2025-03-17T17:57:20.203215482Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185400"
Mar 17 17:57:20.205489 containerd[1500]: time="2025-03-17T17:57:20.205449943Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:20.209547 containerd[1500]: time="2025-03-17T17:57:20.209003743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:20.209753 containerd[1500]: time="2025-03-17T17:57:20.209722085Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.478354519s"
Mar 17 17:57:20.209801 containerd[1500]: time="2025-03-17T17:57:20.209754107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 17:57:20.238093 containerd[1500]: time="2025-03-17T17:57:20.237985463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:57:20.698901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927473061.mount: Deactivated successfully.
Mar 17 17:57:20.867581 sshd[2310]: Invalid user ftpuser from 220.81.148.101 port 57182
Mar 17 17:57:21.374501 containerd[1500]: time="2025-03-17T17:57:21.374425508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:21.375465 containerd[1500]: time="2025-03-17T17:57:21.375309799Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843"
Mar 17 17:57:21.375940 containerd[1500]: time="2025-03-17T17:57:21.375897900Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:21.378563 containerd[1500]: time="2025-03-17T17:57:21.378529311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:21.379806 containerd[1500]: time="2025-03-17T17:57:21.379684954Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.141622503s"
Mar 17 17:57:21.379806 containerd[1500]: time="2025-03-17T17:57:21.379713308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 17:57:21.404157 containerd[1500]: time="2025-03-17T17:57:21.403923895Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 17:57:21.829191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616444080.mount: Deactivated successfully.
Mar 17 17:57:21.833899 containerd[1500]: time="2025-03-17T17:57:21.833847788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:21.834587 containerd[1500]: time="2025-03-17T17:57:21.834550239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312"
Mar 17 17:57:21.835214 containerd[1500]: time="2025-03-17T17:57:21.835167306Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:21.837028 containerd[1500]: time="2025-03-17T17:57:21.836995282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:21.837835 containerd[1500]: time="2025-03-17T17:57:21.837731257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 433.774669ms"
Mar 17 17:57:21.837835 containerd[1500]: time="2025-03-17T17:57:21.837756767Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 17:57:21.859100 containerd[1500]: time="2025-03-17T17:57:21.858983684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 17:57:22.324179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242618664.mount: Deactivated successfully.
Mar 17 17:57:22.427760 sshd[2310]: maximum authentication attempts exceeded for invalid user ftpuser from 220.81.148.101 port 57182 ssh2 [preauth]
Mar 17 17:57:22.427760 sshd[2310]: Disconnecting invalid user ftpuser 220.81.148.101 port 57182: Too many authentication failures [preauth]
Mar 17 17:57:22.431351 systemd[1]: sshd@27-37.27.0.76:22-220.81.148.101:57182.service: Deactivated successfully.
Mar 17 17:57:23.071762 systemd[1]: Started sshd@28-37.27.0.76:22-220.81.148.101:57964.service - OpenSSH per-connection server daemon (220.81.148.101:57964).
Mar 17 17:57:23.822700 containerd[1500]: time="2025-03-17T17:57:23.822630479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:23.823755 containerd[1500]: time="2025-03-17T17:57:23.823704792Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653"
Mar 17 17:57:23.824785 containerd[1500]: time="2025-03-17T17:57:23.824739480Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:23.827805 containerd[1500]: time="2025-03-17T17:57:23.827767977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:23.829450 containerd[1500]: time="2025-03-17T17:57:23.829238293Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.970152962s"
Mar 17 17:57:23.829450 containerd[1500]: time="2025-03-17T17:57:23.829286104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 17:57:25.571386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Mar 17 17:57:25.581664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:57:25.744922 sshd[2438]: Invalid user ftpuser from 220.81.148.101 port 57964
Mar 17 17:57:25.812731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:57:25.814441 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:57:25.871528 kubelet[2509]: E0317 17:57:25.870281 2509 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:57:25.874881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:57:25.875077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:57:26.568309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:57:26.575801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:57:26.598887 systemd[1]: Reloading requested from client PID 2523 ('systemctl') (unit session-7.scope)...
Mar 17 17:57:26.598901 systemd[1]: Reloading...
Mar 17 17:57:26.736114 zram_generator::config[2563]: No configuration found.
Mar 17 17:57:26.869557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:57:26.956467 systemd[1]: Reloading finished in 357 ms.
Mar 17 17:57:27.012920 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:57:27.017205 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:57:27.017564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:57:27.022743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:57:27.036474 sshd[2438]: Received disconnect from 220.81.148.101 port 57964:11: disconnected by user [preauth]
Mar 17 17:57:27.036474 sshd[2438]: Disconnected from invalid user ftpuser 220.81.148.101 port 57964 [preauth]
Mar 17 17:57:27.039390 systemd[1]: sshd@28-37.27.0.76:22-220.81.148.101:57964.service: Deactivated successfully.
Mar 17 17:57:27.181580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:57:27.187774 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:57:27.235191 kubelet[2623]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:57:27.235191 kubelet[2623]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:57:27.235191 kubelet[2623]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:57:27.236574 kubelet[2623]: I0317 17:57:27.236510 2623 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:57:27.335488 systemd[1]: Started sshd@29-37.27.0.76:22-220.81.148.101:58584.service - OpenSSH per-connection server daemon (220.81.148.101:58584).
Mar 17 17:57:27.504219 kubelet[2623]: I0317 17:57:27.504080 2623 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:57:27.504219 kubelet[2623]: I0317 17:57:27.504108 2623 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:57:27.504366 kubelet[2623]: I0317 17:57:27.504291 2623 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:57:27.528067 kubelet[2623]: I0317 17:57:27.527804 2623 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:57:27.530875 kubelet[2623]: E0317 17:57:27.530797 2623 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://37.27.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.543608 kubelet[2623]: I0317 17:57:27.543569 2623 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:57:27.544860 kubelet[2623]: I0317 17:57:27.544815 2623 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:57:27.545026 kubelet[2623]: I0317 17:57:27.544847 2623 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-5-05efd5484b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:57:27.545026 kubelet[2623]: I0317 17:57:27.545023 2623 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:57:27.545141 kubelet[2623]: I0317 17:57:27.545032 2623 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:57:27.545184 kubelet[2623]: I0317 17:57:27.545167 2623 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:57:27.546443 kubelet[2623]: W0317 17:57:27.546341 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-5-05efd5484b&limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.546443 kubelet[2623]: E0317 17:57:27.546394 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://37.27.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-5-05efd5484b&limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.547584 kubelet[2623]: I0317 17:57:27.547383 2623 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:57:27.547584 kubelet[2623]: I0317 17:57:27.547429 2623 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:57:27.547584 kubelet[2623]: I0317 17:57:27.547464 2623 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:57:27.547584 kubelet[2623]: I0317 17:57:27.547492 2623 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:57:27.550750 kubelet[2623]: W0317 17:57:27.550549 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.550750 kubelet[2623]: E0317 17:57:27.550649 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://37.27.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.552432 kubelet[2623]: I0317 17:57:27.550868 2623 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:57:27.552432 kubelet[2623]: I0317 17:57:27.552174 2623 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:57:27.552432 kubelet[2623]: W0317 17:57:27.552251 2623 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:57:27.552923 kubelet[2623]: I0317 17:57:27.552901 2623 server.go:1264] "Started kubelet"
Mar 17 17:57:27.560359 kubelet[2623]: E0317 17:57:27.560209 2623 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.0.76:6443/api/v1/namespaces/default/events\": dial tcp 37.27.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-2-5-05efd5484b.182da8d140e8d3f3 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-5-05efd5484b,UID:ci-4152-2-2-5-05efd5484b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-5-05efd5484b,},FirstTimestamp:2025-03-17 17:57:27.552881651 +0000 UTC m=+0.360551713,LastTimestamp:2025-03-17 17:57:27.552881651 +0000 UTC m=+0.360551713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-5-05efd5484b,}"
Mar 17 17:57:27.560777 kubelet[2623]: I0317 17:57:27.560748 2623 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:57:27.562080 kubelet[2623]: I0317 17:57:27.561654 2623 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:57:27.562080 kubelet[2623]: I0317 17:57:27.562027 2623 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:57:27.564223 kubelet[2623]: I0317 17:57:27.564195 2623 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:57:27.566200 kubelet[2623]: I0317 17:57:27.566169 2623 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:57:27.569665 kubelet[2623]: I0317 17:57:27.569633 2623 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:57:27.572196 kubelet[2623]: I0317 17:57:27.571826 2623 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:57:27.572196 kubelet[2623]: I0317 17:57:27.571883 2623 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:57:27.573648 kubelet[2623]: E0317 17:57:27.573628 2623 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:57:27.573973 kubelet[2623]: W0317 17:57:27.573931 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.574052 kubelet[2623]: E0317 17:57:27.574041 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://37.27.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.574179 kubelet[2623]: E0317 17:57:27.574160 2623 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-5-05efd5484b?timeout=10s\": dial tcp 37.27.0.76:6443: connect: connection refused" interval="200ms"
Mar 17 17:57:27.574883 kubelet[2623]: I0317 17:57:27.574866 2623 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:57:27.576252 kubelet[2623]: I0317 17:57:27.576238 2623 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:57:27.576316 kubelet[2623]: I0317 17:57:27.576307 2623 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:57:27.601997 kubelet[2623]: I0317 17:57:27.601958 2623 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:57:27.601997 kubelet[2623]: I0317 17:57:27.601976 2623 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:57:27.601997 kubelet[2623]: I0317 17:57:27.602002 2623 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:57:27.602302 kubelet[2623]: I0317 17:57:27.602266 2623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:57:27.603767 kubelet[2623]: I0317 17:57:27.603686 2623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:57:27.603767 kubelet[2623]: I0317 17:57:27.603714 2623 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:57:27.603767 kubelet[2623]: I0317 17:57:27.603733 2623 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:57:27.604371 kubelet[2623]: I0317 17:57:27.604315 2623 policy_none.go:49] "None policy: Start"
Mar 17 17:57:27.604748 kubelet[2623]: W0317 17:57:27.604607 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.604748 kubelet[2623]: E0317 17:57:27.604634 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://37.27.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:27.604865 kubelet[2623]: E0317 17:57:27.604843 2623 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:57:27.605535 kubelet[2623]: I0317 17:57:27.605474 2623 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:57:27.605535 kubelet[2623]: I0317 17:57:27.605516 2623 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:57:27.612091 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:57:27.634758 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:57:27.638318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:57:27.654533 kubelet[2623]: I0317 17:57:27.654490 2623 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:57:27.655046 kubelet[2623]: I0317 17:57:27.654995 2623 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:57:27.655170 kubelet[2623]: I0317 17:57:27.655148 2623 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:57:27.657979 kubelet[2623]: E0317 17:57:27.657955 2623 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:27.672938 kubelet[2623]: I0317 17:57:27.672860 2623 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.673289 kubelet[2623]: E0317 17:57:27.673240 2623 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.0.76:6443/api/v1/nodes\": dial tcp 37.27.0.76:6443: connect: connection refused" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.705267 kubelet[2623]: I0317 17:57:27.705213 2623 topology_manager.go:215] "Topology Admit Handler" podUID="8a8cbeac4acd1939a3f0aed9dfb3b5cf" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.707024 kubelet[2623]: I0317 17:57:27.706964 2623 topology_manager.go:215] "Topology Admit Handler" podUID="e744c971481bacdee0ab0b693d567ef1" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.708423 kubelet[2623]: I0317 17:57:27.708375 2623 topology_manager.go:215] "Topology Admit Handler" podUID="d2339a0ad5ff6ca620659cd5f0757167" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.715366 systemd[1]: Created slice kubepods-burstable-pod8a8cbeac4acd1939a3f0aed9dfb3b5cf.slice - libcontainer container kubepods-burstable-pod8a8cbeac4acd1939a3f0aed9dfb3b5cf.slice.
Mar 17 17:57:27.733565 systemd[1]: Created slice kubepods-burstable-pode744c971481bacdee0ab0b693d567ef1.slice - libcontainer container kubepods-burstable-pode744c971481bacdee0ab0b693d567ef1.slice.
Mar 17 17:57:27.747465 systemd[1]: Created slice kubepods-burstable-podd2339a0ad5ff6ca620659cd5f0757167.slice - libcontainer container kubepods-burstable-podd2339a0ad5ff6ca620659cd5f0757167.slice.
Mar 17 17:57:27.775099 kubelet[2623]: E0317 17:57:27.774968 2623 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-5-05efd5484b?timeout=10s\": dial tcp 37.27.0.76:6443: connect: connection refused" interval="400ms"
Mar 17 17:57:27.873699 kubelet[2623]: I0317 17:57:27.873381 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873699 kubelet[2623]: I0317 17:57:27.873453 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2339a0ad5ff6ca620659cd5f0757167-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-5-05efd5484b\" (UID: \"d2339a0ad5ff6ca620659cd5f0757167\") " pod="kube-system/kube-scheduler-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873699 kubelet[2623]: I0317 17:57:27.873471 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a8cbeac4acd1939a3f0aed9dfb3b5cf-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" (UID: \"8a8cbeac4acd1939a3f0aed9dfb3b5cf\") " pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873699 kubelet[2623]: I0317 17:57:27.873487 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a8cbeac4acd1939a3f0aed9dfb3b5cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" (UID: \"8a8cbeac4acd1939a3f0aed9dfb3b5cf\") " pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873699 kubelet[2623]: I0317 17:57:27.873522 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873936 kubelet[2623]: I0317 17:57:27.873544 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a8cbeac4acd1939a3f0aed9dfb3b5cf-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" (UID: \"8a8cbeac4acd1939a3f0aed9dfb3b5cf\") " pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873936 kubelet[2623]: I0317 17:57:27.873573 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873936 kubelet[2623]: I0317 17:57:27.873593 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.873936 kubelet[2623]: I0317 17:57:27.873610 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.875089 kubelet[2623]: I0317 17:57:27.875039 2623 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:27.875468 kubelet[2623]: E0317 17:57:27.875403 2623 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.0.76:6443/api/v1/nodes\": dial tcp 37.27.0.76:6443: connect: connection refused" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:28.032330 containerd[1500]: time="2025-03-17T17:57:28.032191283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-5-05efd5484b,Uid:8a8cbeac4acd1939a3f0aed9dfb3b5cf,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:28.047076 containerd[1500]: time="2025-03-17T17:57:28.046027724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-5-05efd5484b,Uid:e744c971481bacdee0ab0b693d567ef1,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:28.050584 containerd[1500]: time="2025-03-17T17:57:28.050304155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-5-05efd5484b,Uid:d2339a0ad5ff6ca620659cd5f0757167,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:28.176218 kubelet[2623]: E0317 17:57:28.176159 2623 controller.go:145] "Failed to ensure lease
exists, will retry" err="Get \"https://37.27.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-5-05efd5484b?timeout=10s\": dial tcp 37.27.0.76:6443: connect: connection refused" interval="800ms" Mar 17 17:57:28.277431 kubelet[2623]: I0317 17:57:28.277377 2623 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-5-05efd5484b" Mar 17 17:57:28.277989 kubelet[2623]: E0317 17:57:28.277698 2623 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.0.76:6443/api/v1/nodes\": dial tcp 37.27.0.76:6443: connect: connection refused" node="ci-4152-2-2-5-05efd5484b" Mar 17 17:57:28.470968 kubelet[2623]: W0317 17:57:28.470808 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused Mar 17 17:57:28.470968 kubelet[2623]: E0317 17:57:28.470872 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://37.27.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused Mar 17 17:57:28.475749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780974027.mount: Deactivated successfully. 
Mar 17 17:57:28.484049 containerd[1500]: time="2025-03-17T17:57:28.483983550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:57:28.485595 containerd[1500]: time="2025-03-17T17:57:28.485553687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:57:28.486895 containerd[1500]: time="2025-03-17T17:57:28.486841093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Mar 17 17:57:28.487651 containerd[1500]: time="2025-03-17T17:57:28.487590448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:57:28.489330 containerd[1500]: time="2025-03-17T17:57:28.489294702Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:57:28.490376 containerd[1500]: time="2025-03-17T17:57:28.490267405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:57:28.490488 containerd[1500]: time="2025-03-17T17:57:28.490450086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:57:28.492538 containerd[1500]: time="2025-03-17T17:57:28.492480133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:57:28.495605 containerd[1500]: time="2025-03-17T17:57:28.495231653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.84473ms"
Mar 17 17:57:28.497124 containerd[1500]: time="2025-03-17T17:57:28.497012354Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 450.871103ms"
Mar 17 17:57:28.500699 containerd[1500]: time="2025-03-17T17:57:28.500613571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.317969ms"
Mar 17 17:57:28.624627 containerd[1500]: time="2025-03-17T17:57:28.624547322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:28.625102 containerd[1500]: time="2025-03-17T17:57:28.624897492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:28.625102 containerd[1500]: time="2025-03-17T17:57:28.624916489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:28.625102 containerd[1500]: time="2025-03-17T17:57:28.625041027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:28.636889 containerd[1500]: time="2025-03-17T17:57:28.636583123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:28.636889 containerd[1500]: time="2025-03-17T17:57:28.636656073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:28.636889 containerd[1500]: time="2025-03-17T17:57:28.636676602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:28.638137 containerd[1500]: time="2025-03-17T17:57:28.638097363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:28.665086 systemd[1]: Started cri-containerd-4680052a9c0873804ea27734e152a21c355c0341bebea65eb77f8ec286dad27d.scope - libcontainer container 4680052a9c0873804ea27734e152a21c355c0341bebea65eb77f8ec286dad27d.
Mar 17 17:57:28.665784 containerd[1500]: time="2025-03-17T17:57:28.664947249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:28.668141 containerd[1500]: time="2025-03-17T17:57:28.666823573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:28.668141 containerd[1500]: time="2025-03-17T17:57:28.666841517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:28.668141 containerd[1500]: time="2025-03-17T17:57:28.666954854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:28.686871 systemd[1]: Started cri-containerd-f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576.scope - libcontainer container f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576.
Mar 17 17:57:28.703577 systemd[1]: Started cri-containerd-8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5.scope - libcontainer container 8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5.
Mar 17 17:57:28.758078 containerd[1500]: time="2025-03-17T17:57:28.757015196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-5-05efd5484b,Uid:8a8cbeac4acd1939a3f0aed9dfb3b5cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4680052a9c0873804ea27734e152a21c355c0341bebea65eb77f8ec286dad27d\""
Mar 17 17:57:28.764592 containerd[1500]: time="2025-03-17T17:57:28.764526603Z" level=info msg="CreateContainer within sandbox \"4680052a9c0873804ea27734e152a21c355c0341bebea65eb77f8ec286dad27d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:57:28.772200 containerd[1500]: time="2025-03-17T17:57:28.771532842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-5-05efd5484b,Uid:d2339a0ad5ff6ca620659cd5f0757167,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5\""
Mar 17 17:57:28.772922 containerd[1500]: time="2025-03-17T17:57:28.772688646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-5-05efd5484b,Uid:e744c971481bacdee0ab0b693d567ef1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576\""
Mar 17 17:57:28.774871 containerd[1500]: time="2025-03-17T17:57:28.774846429Z" level=info msg="CreateContainer within sandbox \"8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:57:28.776101 containerd[1500]: time="2025-03-17T17:57:28.776067588Z" level=info msg="CreateContainer within sandbox \"f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:57:28.794450 containerd[1500]: time="2025-03-17T17:57:28.794337280Z" level=info msg="CreateContainer within sandbox \"8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8\""
Mar 17 17:57:28.795452 containerd[1500]: time="2025-03-17T17:57:28.795388382Z" level=info msg="CreateContainer within sandbox \"4680052a9c0873804ea27734e152a21c355c0341bebea65eb77f8ec286dad27d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2412f9233db08d8826189d8ed43d4fa3c5d400420e8a9c76aab33d4df9ecdf66\""
Mar 17 17:57:28.795568 containerd[1500]: time="2025-03-17T17:57:28.795534583Z" level=info msg="StartContainer for \"854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8\""
Mar 17 17:57:28.796270 containerd[1500]: time="2025-03-17T17:57:28.796142076Z" level=info msg="CreateContainer within sandbox \"f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4\""
Mar 17 17:57:28.796630 containerd[1500]: time="2025-03-17T17:57:28.796611886Z" level=info msg="StartContainer for \"2412f9233db08d8826189d8ed43d4fa3c5d400420e8a9c76aab33d4df9ecdf66\""
Mar 17 17:57:28.798997 containerd[1500]: time="2025-03-17T17:57:28.798974762Z" level=info msg="StartContainer for \"39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4\""
Mar 17 17:57:28.831564 systemd[1]: Started cri-containerd-854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8.scope - libcontainer container 854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8.
Mar 17 17:57:28.836710 systemd[1]: Started cri-containerd-2412f9233db08d8826189d8ed43d4fa3c5d400420e8a9c76aab33d4df9ecdf66.scope - libcontainer container 2412f9233db08d8826189d8ed43d4fa3c5d400420e8a9c76aab33d4df9ecdf66.
Mar 17 17:57:28.854404 kubelet[2623]: W0317 17:57:28.853819 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:28.854404 kubelet[2623]: E0317 17:57:28.853852 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://37.27.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:28.853937 systemd[1]: Started cri-containerd-39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4.scope - libcontainer container 39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4.
Mar 17 17:57:28.897676 containerd[1500]: time="2025-03-17T17:57:28.897552777Z" level=info msg="StartContainer for \"2412f9233db08d8826189d8ed43d4fa3c5d400420e8a9c76aab33d4df9ecdf66\" returns successfully"
Mar 17 17:57:28.921371 containerd[1500]: time="2025-03-17T17:57:28.921322403Z" level=info msg="StartContainer for \"39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4\" returns successfully"
Mar 17 17:57:28.924865 containerd[1500]: time="2025-03-17T17:57:28.924773634Z" level=info msg="StartContainer for \"854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8\" returns successfully"
Mar 17 17:57:28.977016 kubelet[2623]: E0317 17:57:28.976948 2623 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-5-05efd5484b?timeout=10s\": dial tcp 37.27.0.76:6443: connect: connection refused" interval="1.6s"
Mar 17 17:57:29.013001 kubelet[2623]: W0317 17:57:29.012848 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-5-05efd5484b&limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:29.013001 kubelet[2623]: E0317 17:57:29.012934 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://37.27.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-5-05efd5484b&limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:29.080195 kubelet[2623]: I0317 17:57:29.080154 2623 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:29.082438 kubelet[2623]: E0317 17:57:29.082386 2623 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.0.76:6443/api/v1/nodes\": dial tcp 37.27.0.76:6443: connect: connection refused" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:29.132909 kubelet[2623]: W0317 17:57:29.132858 2623 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:29.132909 kubelet[2623]: E0317 17:57:29.132903 2623 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://37.27.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.0.76:6443: connect: connection refused
Mar 17 17:57:30.318566 sshd[2631]: Invalid user test1 from 220.81.148.101 port 58584
Mar 17 17:57:30.579805 kubelet[2623]: E0317 17:57:30.579747 2623 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-2-5-05efd5484b\" not found" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:30.685376 kubelet[2623]: I0317 17:57:30.685340 2623 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:30.700883 kubelet[2623]: I0317 17:57:30.700823 2623 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:30.712210 kubelet[2623]: E0317 17:57:30.712146 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:30.812773 kubelet[2623]: E0317 17:57:30.812713 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:30.913980 kubelet[2623]: E0317 17:57:30.913862 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.014479 kubelet[2623]: E0317 17:57:31.014373 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.115403 kubelet[2623]: E0317 17:57:31.115349 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.216073 kubelet[2623]: E0317 17:57:31.215952 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.316661 kubelet[2623]: E0317 17:57:31.316599 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.417232 kubelet[2623]: E0317 17:57:31.417182 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.518750 kubelet[2623]: E0317 17:57:31.518609 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.619052 kubelet[2623]: E0317 17:57:31.619003 2623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-5-05efd5484b\" not found"
Mar 17 17:57:31.857168 sshd[2631]: maximum authentication attempts exceeded for invalid user test1 from 220.81.148.101 port 58584 ssh2 [preauth]
Mar 17 17:57:31.857168 sshd[2631]: Disconnecting invalid user test1 220.81.148.101 port 58584: Too many authentication failures [preauth]
Mar 17 17:57:31.860387 systemd[1]: sshd@29-37.27.0.76:22-220.81.148.101:58584.service: Deactivated successfully.
Mar 17 17:57:32.469657 systemd[1]: Started sshd@30-37.27.0.76:22-220.81.148.101:59398.service - OpenSSH per-connection server daemon (220.81.148.101:59398).
Mar 17 17:57:32.551174 kubelet[2623]: I0317 17:57:32.551142 2623 apiserver.go:52] "Watching apiserver"
Mar 17 17:57:32.572433 kubelet[2623]: I0317 17:57:32.572358 2623 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:57:32.620195 systemd[1]: Reloading requested from client PID 2904 ('systemctl') (unit session-7.scope)...
Mar 17 17:57:32.620213 systemd[1]: Reloading...
Mar 17 17:57:32.727497 zram_generator::config[2946]: No configuration found.
Mar 17 17:57:32.863438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:57:32.965018 systemd[1]: Reloading finished in 344 ms.
Mar 17 17:57:33.012993 kubelet[2623]: E0317 17:57:33.012744 2623 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152-2-2-5-05efd5484b.182da8d140e8d3f3 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-5-05efd5484b,UID:ci-4152-2-2-5-05efd5484b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-5-05efd5484b,},FirstTimestamp:2025-03-17 17:57:27.552881651 +0000 UTC m=+0.360551713,LastTimestamp:2025-03-17 17:57:27.552881651 +0000 UTC m=+0.360551713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-5-05efd5484b,}"
Mar 17 17:57:33.013113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:57:33.029190 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:57:33.029588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:57:33.035639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:57:33.203836 (kubelet)[2997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:57:33.204889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:57:33.254277 kubelet[2997]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:57:33.254277 kubelet[2997]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:57:33.254277 kubelet[2997]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:57:33.256986 kubelet[2997]: I0317 17:57:33.255576 2997 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:57:33.262378 kubelet[2997]: I0317 17:57:33.262018 2997 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:57:33.262378 kubelet[2997]: I0317 17:57:33.262045 2997 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:57:33.262378 kubelet[2997]: I0317 17:57:33.262317 2997 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:57:33.265731 kubelet[2997]: I0317 17:57:33.264451 2997 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:57:33.267636 kubelet[2997]: I0317 17:57:33.267616 2997 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:57:33.276129 kubelet[2997]: I0317 17:57:33.276100 2997 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:57:33.277683 kubelet[2997]: I0317 17:57:33.277650 2997 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:57:33.277907 kubelet[2997]: I0317 17:57:33.277752 2997 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-5-05efd5484b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:57:33.278041 kubelet[2997]: I0317 17:57:33.278030 2997 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:57:33.278091 kubelet[2997]: I0317 17:57:33.278083 2997 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:57:33.278178 kubelet[2997]: I0317 17:57:33.278169 2997 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:57:33.278325 kubelet[2997]: I0317 17:57:33.278315 2997 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:57:33.278382 kubelet[2997]: I0317 17:57:33.278372 2997 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:57:33.278460 kubelet[2997]: I0317 17:57:33.278451 2997 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:57:33.278527 kubelet[2997]: I0317 17:57:33.278518 2997 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:57:33.283051 kubelet[2997]: I0317 17:57:33.283035 2997 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:57:33.283267 kubelet[2997]: I0317 17:57:33.283255 2997 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:57:33.283686 kubelet[2997]: I0317 17:57:33.283674 2997 server.go:1264] "Started kubelet"
Mar 17 17:57:33.285195 kubelet[2997]: I0317 17:57:33.285181 2997 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:57:33.291223 kubelet[2997]: I0317 17:57:33.291198 2997 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:57:33.292071 kubelet[2997]: I0317 17:57:33.292056 2997 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:57:33.296463 kubelet[2997]: I0317 17:57:33.296318 2997 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:57:33.296719 kubelet[2997]: I0317 17:57:33.296705 2997 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:57:33.297621 kubelet[2997]: I0317 17:57:33.297593 2997 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:57:33.301786 kubelet[2997]: I0317 17:57:33.301344 2997 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:57:33.302807 kubelet[2997]: I0317 17:57:33.302786 2997 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:57:33.307606 kubelet[2997]: I0317 17:57:33.307567 2997 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:57:33.315686 kubelet[2997]: I0317 17:57:33.315644 2997 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:57:33.315686 kubelet[2997]: I0317 17:57:33.315690 2997 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:57:33.315781 kubelet[2997]: I0317 17:57:33.315711 2997 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:57:33.315781 kubelet[2997]: E0317 17:57:33.315761 2997 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:57:33.320839 kubelet[2997]: I0317 17:57:33.320617 2997 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:57:33.320839 kubelet[2997]: I0317 17:57:33.320634 2997 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:57:33.320839 kubelet[2997]: I0317 17:57:33.320743 2997 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:57:33.328778 kubelet[2997]: E0317 17:57:33.328731 2997 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:57:33.366052 kubelet[2997]: I0317 17:57:33.366030 2997 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:57:33.366052 kubelet[2997]: I0317 17:57:33.366067 2997 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:57:33.366052 kubelet[2997]: I0317 17:57:33.366084 2997 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:57:33.366653 kubelet[2997]: I0317 17:57:33.366553 2997 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:57:33.366653 kubelet[2997]: I0317 17:57:33.366566 2997 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:57:33.366653 kubelet[2997]: I0317 17:57:33.366596 2997 policy_none.go:49] "None policy: Start"
Mar 17 17:57:33.367453 kubelet[2997]: I0317 17:57:33.367138 2997 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:57:33.367453 kubelet[2997]: I0317 17:57:33.367156 2997 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:57:33.367453 kubelet[2997]: I0317 17:57:33.367259 2997 state_mem.go:75] "Updated machine memory state"
Mar 17 17:57:33.371983 kubelet[2997]: I0317 17:57:33.371957 2997 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:57:33.372212 kubelet[2997]: I0317 17:57:33.372167 2997 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:57:33.372285 kubelet[2997]: I0317 17:57:33.372261 2997 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:57:33.403395 kubelet[2997]: I0317 17:57:33.403346 2997 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.409775 kubelet[2997]: I0317 17:57:33.409571 2997 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.409775 kubelet[2997]: I0317 17:57:33.409653 2997 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.416053 kubelet[2997]: I0317 17:57:33.415999 2997 topology_manager.go:215] "Topology Admit Handler" podUID="8a8cbeac4acd1939a3f0aed9dfb3b5cf" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.416209 kubelet[2997]: I0317 17:57:33.416096 2997 topology_manager.go:215] "Topology Admit Handler" podUID="e744c971481bacdee0ab0b693d567ef1" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.416209 kubelet[2997]: I0317 17:57:33.416145 2997 topology_manager.go:215] "Topology Admit Handler" podUID="d2339a0ad5ff6ca620659cd5f0757167" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504029 kubelet[2997]: I0317 17:57:33.503834 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2339a0ad5ff6ca620659cd5f0757167-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-5-05efd5484b\" (UID: \"d2339a0ad5ff6ca620659cd5f0757167\") " pod="kube-system/kube-scheduler-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504029 kubelet[2997]: I0317 17:57:33.503876 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a8cbeac4acd1939a3f0aed9dfb3b5cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" (UID: \"8a8cbeac4acd1939a3f0aed9dfb3b5cf\") " pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504029 kubelet[2997]: I0317 17:57:33.503895 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504029 kubelet[2997]: I0317 17:57:33.503914 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504029 kubelet[2997]: I0317 17:57:33.503932 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504268 kubelet[2997]: I0317 17:57:33.503947 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b"
Mar 17 17:57:33.504268 kubelet[2997]: I0317 17:57:33.503961 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a8cbeac4acd1939a3f0aed9dfb3b5cf-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" (UID: \"8a8cbeac4acd1939a3f0aed9dfb3b5cf\") " pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b"
Mar 17
17:57:33.504268 kubelet[2997]: I0317 17:57:33.503975 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a8cbeac4acd1939a3f0aed9dfb3b5cf-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" (UID: \"8a8cbeac4acd1939a3f0aed9dfb3b5cf\") " pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b" Mar 17 17:57:33.504268 kubelet[2997]: I0317 17:57:33.503988 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e744c971481bacdee0ab0b693d567ef1-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-5-05efd5484b\" (UID: \"e744c971481bacdee0ab0b693d567ef1\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b" Mar 17 17:57:33.637681 sudo[3030]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:57:33.638047 sudo[3030]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:57:34.177569 sudo[3030]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:34.283418 kubelet[2997]: I0317 17:57:34.283371 2997 apiserver.go:52] "Watching apiserver" Mar 17 17:57:34.302517 kubelet[2997]: I0317 17:57:34.302450 2997 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:57:34.355754 kubelet[2997]: E0317 17:57:34.355713 2997 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-2-5-05efd5484b\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b" Mar 17 17:57:34.371002 kubelet[2997]: I0317 17:57:34.370940 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-2-5-05efd5484b" podStartSLOduration=1.370921969 podStartE2EDuration="1.370921969s" podCreationTimestamp="2025-03-17 17:57:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:57:34.370302626 +0000 UTC m=+1.160392783" watchObservedRunningTime="2025-03-17 17:57:34.370921969 +0000 UTC m=+1.161012126" Mar 17 17:57:34.386161 kubelet[2997]: I0317 17:57:34.386113 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-2-5-05efd5484b" podStartSLOduration=1.386094143 podStartE2EDuration="1.386094143s" podCreationTimestamp="2025-03-17 17:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:57:34.378616452 +0000 UTC m=+1.168706610" watchObservedRunningTime="2025-03-17 17:57:34.386094143 +0000 UTC m=+1.176184300" Mar 17 17:57:35.693292 sshd[2901]: Invalid user test1 from 220.81.148.101 port 59398 Mar 17 17:57:35.708181 sudo[2005]: pam_unix(sudo:session): session closed for user root Mar 17 17:57:35.866747 sshd[2004]: Connection closed by 139.178.68.195 port 50466 Mar 17 17:57:35.868272 sshd-session[2002]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:35.872060 systemd[1]: sshd@25-37.27.0.76:22-139.178.68.195:50466.service: Deactivated successfully. Mar 17 17:57:35.874172 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:57:35.874697 systemd[1]: session-7.scope: Consumed 4.764s CPU time, 186.1M memory peak, 0B memory swap peak. Mar 17 17:57:35.875286 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:57:35.876188 systemd-logind[1487]: Removed session 7. 
Mar 17 17:57:37.218491 sshd[2901]: maximum authentication attempts exceeded for invalid user test1 from 220.81.148.101 port 59398 ssh2 [preauth]
Mar 17 17:57:37.218491 sshd[2901]: Disconnecting invalid user test1 220.81.148.101 port 59398: Too many authentication failures [preauth]
Mar 17 17:57:37.221423 systemd[1]: sshd@30-37.27.0.76:22-220.81.148.101:59398.service: Deactivated successfully.
Mar 17 17:57:37.856658 systemd[1]: Started sshd@31-37.27.0.76:22-220.81.148.101:60124.service - OpenSSH per-connection server daemon (220.81.148.101:60124).
Mar 17 17:57:38.869013 kubelet[2997]: I0317 17:57:38.868870 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-2-5-05efd5484b" podStartSLOduration=5.868855508 podStartE2EDuration="5.868855508s" podCreationTimestamp="2025-03-17 17:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:57:34.386303793 +0000 UTC m=+1.176393950" watchObservedRunningTime="2025-03-17 17:57:38.868855508 +0000 UTC m=+5.658945666"
Mar 17 17:57:41.340832 sshd[3072]: Invalid user test1 from 220.81.148.101 port 60124
Mar 17 17:57:41.978148 sshd[3072]: Received disconnect from 220.81.148.101 port 60124:11: disconnected by user [preauth]
Mar 17 17:57:41.978148 sshd[3072]: Disconnected from invalid user test1 220.81.148.101 port 60124 [preauth]
Mar 17 17:57:41.980257 systemd[1]: sshd@31-37.27.0.76:22-220.81.148.101:60124.service: Deactivated successfully.
Mar 17 17:57:42.282722 systemd[1]: Started sshd@32-37.27.0.76:22-220.81.148.101:60762.service - OpenSSH per-connection server daemon (220.81.148.101:60762).
Mar 17 17:57:45.154923 sshd[3077]: Invalid user test2 from 220.81.148.101 port 60762
Mar 17 17:57:46.706568 sshd[3077]: maximum authentication attempts exceeded for invalid user test2 from 220.81.148.101 port 60762 ssh2 [preauth]
Mar 17 17:57:46.706568 sshd[3077]: Disconnecting invalid user test2 220.81.148.101 port 60762: Too many authentication failures [preauth]
Mar 17 17:57:46.709355 systemd[1]: sshd@32-37.27.0.76:22-220.81.148.101:60762.service: Deactivated successfully.
Mar 17 17:57:47.342476 systemd[1]: Started sshd@33-37.27.0.76:22-220.81.148.101:33284.service - OpenSSH per-connection server daemon (220.81.148.101:33284).
Mar 17 17:57:47.947984 kubelet[2997]: I0317 17:57:47.947945 2997 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:57:47.948940 kubelet[2997]: I0317 17:57:47.948861 2997 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:57:47.949005 containerd[1500]: time="2025-03-17T17:57:47.948592100Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:57:48.800333 kubelet[2997]: I0317 17:57:48.799102 2997 topology_manager.go:215] "Topology Admit Handler" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" podNamespace="kube-system" podName="cilium-z86mm"
Mar 17 17:57:48.800803 kubelet[2997]: I0317 17:57:48.800783 2997 topology_manager.go:215] "Topology Admit Handler" podUID="9e48bd36-156d-4514-8b33-e8d5fa5d886a" podNamespace="kube-system" podName="kube-proxy-kqzft"
Mar 17 17:57:48.812047 systemd[1]: Created slice kubepods-besteffort-pod9e48bd36_156d_4514_8b33_e8d5fa5d886a.slice - libcontainer container kubepods-besteffort-pod9e48bd36_156d_4514_8b33_e8d5fa5d886a.slice.
Mar 17 17:57:48.818755 systemd[1]: Created slice kubepods-burstable-pod4b71e734_ba40_479d_8236_2c82f8e70f24.slice - libcontainer container kubepods-burstable-pod4b71e734_ba40_479d_8236_2c82f8e70f24.slice.
Mar 17 17:57:48.901730 kubelet[2997]: I0317 17:57:48.901688 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-run\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.901730 kubelet[2997]: I0317 17:57:48.901724 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-lib-modules\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902006 kubelet[2997]: I0317 17:57:48.901745 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-xtables-lock\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902033 kubelet[2997]: I0317 17:57:48.902005 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e48bd36-156d-4514-8b33-e8d5fa5d886a-xtables-lock\") pod \"kube-proxy-kqzft\" (UID: \"9e48bd36-156d-4514-8b33-e8d5fa5d886a\") " pod="kube-system/kube-proxy-kqzft"
Mar 17 17:57:48.902033 kubelet[2997]: I0317 17:57:48.902020 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59mj\" (UniqueName: \"kubernetes.io/projected/9e48bd36-156d-4514-8b33-e8d5fa5d886a-kube-api-access-c59mj\") pod \"kube-proxy-kqzft\" (UID: \"9e48bd36-156d-4514-8b33-e8d5fa5d886a\") " pod="kube-system/kube-proxy-kqzft"
Mar 17 17:57:48.902090 kubelet[2997]: I0317 17:57:48.902038 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-hostproc\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902090 kubelet[2997]: I0317 17:57:48.902051 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-config-path\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902090 kubelet[2997]: I0317 17:57:48.902064 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e48bd36-156d-4514-8b33-e8d5fa5d886a-kube-proxy\") pod \"kube-proxy-kqzft\" (UID: \"9e48bd36-156d-4514-8b33-e8d5fa5d886a\") " pod="kube-system/kube-proxy-kqzft"
Mar 17 17:57:48.902090 kubelet[2997]: I0317 17:57:48.902077 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-etc-cni-netd\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902090 kubelet[2997]: I0317 17:57:48.902090 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e48bd36-156d-4514-8b33-e8d5fa5d886a-lib-modules\") pod \"kube-proxy-kqzft\" (UID: \"9e48bd36-156d-4514-8b33-e8d5fa5d886a\") " pod="kube-system/kube-proxy-kqzft"
Mar 17 17:57:48.902201 kubelet[2997]: I0317 17:57:48.902103 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b71e734-ba40-479d-8236-2c82f8e70f24-clustermesh-secrets\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902201 kubelet[2997]: I0317 17:57:48.902116 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-kernel\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902201 kubelet[2997]: I0317 17:57:48.902129 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-bpf-maps\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902201 kubelet[2997]: I0317 17:57:48.902142 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cni-path\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902201 kubelet[2997]: I0317 17:57:48.902157 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-net\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902201 kubelet[2997]: I0317 17:57:48.902170 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mql8\" (UniqueName: \"kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-kube-api-access-7mql8\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902336 kubelet[2997]: I0317 17:57:48.902185 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-cgroup\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:48.902336 kubelet[2997]: I0317 17:57:48.902199 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-hubble-tls\") pod \"cilium-z86mm\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " pod="kube-system/cilium-z86mm"
Mar 17 17:57:49.053962 kubelet[2997]: I0317 17:57:49.053772 2997 topology_manager.go:215] "Topology Admit Handler" podUID="f55707a4-7c0b-4f07-ab76-7a757044df26" podNamespace="kube-system" podName="cilium-operator-599987898-l8v5f"
Mar 17 17:57:49.066965 systemd[1]: Created slice kubepods-besteffort-podf55707a4_7c0b_4f07_ab76_7a757044df26.slice - libcontainer container kubepods-besteffort-podf55707a4_7c0b_4f07_ab76_7a757044df26.slice.
Mar 17 17:57:49.109233 kubelet[2997]: I0317 17:57:49.109138 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f55707a4-7c0b-4f07-ab76-7a757044df26-cilium-config-path\") pod \"cilium-operator-599987898-l8v5f\" (UID: \"f55707a4-7c0b-4f07-ab76-7a757044df26\") " pod="kube-system/cilium-operator-599987898-l8v5f"
Mar 17 17:57:49.109233 kubelet[2997]: I0317 17:57:49.109180 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4kp\" (UniqueName: \"kubernetes.io/projected/f55707a4-7c0b-4f07-ab76-7a757044df26-kube-api-access-rm4kp\") pod \"cilium-operator-599987898-l8v5f\" (UID: \"f55707a4-7c0b-4f07-ab76-7a757044df26\") " pod="kube-system/cilium-operator-599987898-l8v5f"
Mar 17 17:57:49.127256 containerd[1500]: time="2025-03-17T17:57:49.127168253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqzft,Uid:9e48bd36-156d-4514-8b33-e8d5fa5d886a,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:49.127658 containerd[1500]: time="2025-03-17T17:57:49.127168262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z86mm,Uid:4b71e734-ba40-479d-8236-2c82f8e70f24,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:49.161151 containerd[1500]: time="2025-03-17T17:57:49.160935563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:49.161151 containerd[1500]: time="2025-03-17T17:57:49.161140182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:49.161570 containerd[1500]: time="2025-03-17T17:57:49.161179437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:49.161642 containerd[1500]: time="2025-03-17T17:57:49.161568856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:49.164059 containerd[1500]: time="2025-03-17T17:57:49.163591539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:49.164059 containerd[1500]: time="2025-03-17T17:57:49.163651613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:49.164059 containerd[1500]: time="2025-03-17T17:57:49.163662593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:49.164059 containerd[1500]: time="2025-03-17T17:57:49.163847915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:49.185847 systemd[1]: Started cri-containerd-dc0fc1949e352c0caca41ca655d6262b15503a62e7a3f92ddc5642aad0b8a0c2.scope - libcontainer container dc0fc1949e352c0caca41ca655d6262b15503a62e7a3f92ddc5642aad0b8a0c2.
Mar 17 17:57:49.197554 systemd[1]: Started cri-containerd-1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d.scope - libcontainer container 1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d.
Mar 17 17:57:49.232594 containerd[1500]: time="2025-03-17T17:57:49.232557429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqzft,Uid:9e48bd36-156d-4514-8b33-e8d5fa5d886a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc0fc1949e352c0caca41ca655d6262b15503a62e7a3f92ddc5642aad0b8a0c2\""
Mar 17 17:57:49.238456 containerd[1500]: time="2025-03-17T17:57:49.238405869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z86mm,Uid:4b71e734-ba40-479d-8236-2c82f8e70f24,Namespace:kube-system,Attempt:0,} returns sandbox id \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\""
Mar 17 17:57:49.240571 containerd[1500]: time="2025-03-17T17:57:49.239632589Z" level=info msg="CreateContainer within sandbox \"dc0fc1949e352c0caca41ca655d6262b15503a62e7a3f92ddc5642aad0b8a0c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:57:49.242045 containerd[1500]: time="2025-03-17T17:57:49.242005576Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 17:57:49.254747 containerd[1500]: time="2025-03-17T17:57:49.254664442Z" level=info msg="CreateContainer within sandbox \"dc0fc1949e352c0caca41ca655d6262b15503a62e7a3f92ddc5642aad0b8a0c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ed9e4584761b375b458131130550365f7e41a96a6b3c2d10340bb46cd0efd18\""
Mar 17 17:57:49.255125 containerd[1500]: time="2025-03-17T17:57:49.255109037Z" level=info msg="StartContainer for \"7ed9e4584761b375b458131130550365f7e41a96a6b3c2d10340bb46cd0efd18\""
Mar 17 17:57:49.281602 systemd[1]: Started cri-containerd-7ed9e4584761b375b458131130550365f7e41a96a6b3c2d10340bb46cd0efd18.scope - libcontainer container 7ed9e4584761b375b458131130550365f7e41a96a6b3c2d10340bb46cd0efd18.
Mar 17 17:57:49.314888 containerd[1500]: time="2025-03-17T17:57:49.314797055Z" level=info msg="StartContainer for \"7ed9e4584761b375b458131130550365f7e41a96a6b3c2d10340bb46cd0efd18\" returns successfully"
Mar 17 17:57:49.371447 containerd[1500]: time="2025-03-17T17:57:49.371377129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l8v5f,Uid:f55707a4-7c0b-4f07-ab76-7a757044df26,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:49.400327 containerd[1500]: time="2025-03-17T17:57:49.399557107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:49.400327 containerd[1500]: time="2025-03-17T17:57:49.399620297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:49.400327 containerd[1500]: time="2025-03-17T17:57:49.399634254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:49.400327 containerd[1500]: time="2025-03-17T17:57:49.400272937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:49.417543 systemd[1]: Started cri-containerd-fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4.scope - libcontainer container fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4.
Mar 17 17:57:49.466566 containerd[1500]: time="2025-03-17T17:57:49.466533570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l8v5f,Uid:f55707a4-7c0b-4f07-ab76-7a757044df26,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\""
Mar 17 17:57:51.508203 sshd[3082]: Invalid user test2 from 220.81.148.101 port 33284
Mar 17 17:57:53.109006 sshd[3082]: maximum authentication attempts exceeded for invalid user test2 from 220.81.148.101 port 33284 ssh2 [preauth]
Mar 17 17:57:53.109006 sshd[3082]: Disconnecting invalid user test2 220.81.148.101 port 33284: Too many authentication failures [preauth]
Mar 17 17:57:53.115310 systemd[1]: sshd@33-37.27.0.76:22-220.81.148.101:33284.service: Deactivated successfully.
Mar 17 17:57:53.346214 kubelet[2997]: I0317 17:57:53.344030 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqzft" podStartSLOduration=5.344012812 podStartE2EDuration="5.344012812s" podCreationTimestamp="2025-03-17 17:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:57:49.379444142 +0000 UTC m=+16.169534300" watchObservedRunningTime="2025-03-17 17:57:53.344012812 +0000 UTC m=+20.134102969"
Mar 17 17:57:53.750771 systemd[1]: Started sshd@34-37.27.0.76:22-220.81.148.101:34192.service - OpenSSH per-connection server daemon (220.81.148.101:34192).
Mar 17 17:57:54.794053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766453807.mount: Deactivated successfully.
Mar 17 17:57:56.539472 containerd[1500]: time="2025-03-17T17:57:56.539394205Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:56.540716 containerd[1500]: time="2025-03-17T17:57:56.540621371Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 17 17:57:56.542434 containerd[1500]: time="2025-03-17T17:57:56.541639693Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:57:56.543272 containerd[1500]: time="2025-03-17T17:57:56.543159605Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.301125975s"
Mar 17 17:57:56.543272 containerd[1500]: time="2025-03-17T17:57:56.543188259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 17:57:56.544813 containerd[1500]: time="2025-03-17T17:57:56.544589857Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 17:57:56.556229 containerd[1500]: time="2025-03-17T17:57:56.556185585Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:57:56.648879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922346695.mount: Deactivated successfully.
Mar 17 17:57:56.675394 containerd[1500]: time="2025-03-17T17:57:56.675338742Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\""
Mar 17 17:57:56.681215 containerd[1500]: time="2025-03-17T17:57:56.681178455Z" level=info msg="StartContainer for \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\""
Mar 17 17:57:56.902582 systemd[1]: Started cri-containerd-29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9.scope - libcontainer container 29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9.
Mar 17 17:57:56.930646 containerd[1500]: time="2025-03-17T17:57:56.930613160Z" level=info msg="StartContainer for \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\" returns successfully"
Mar 17 17:57:56.943759 systemd[1]: cri-containerd-29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9.scope: Deactivated successfully.
Mar 17 17:57:57.078739 containerd[1500]: time="2025-03-17T17:57:57.078631529Z" level=info msg="shim disconnected" id=29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9 namespace=k8s.io
Mar 17 17:57:57.078739 containerd[1500]: time="2025-03-17T17:57:57.078721790Z" level=warning msg="cleaning up after shim disconnected" id=29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9 namespace=k8s.io
Mar 17 17:57:57.078739 containerd[1500]: time="2025-03-17T17:57:57.078731119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:57:57.405888 containerd[1500]: time="2025-03-17T17:57:57.405792072Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:57:57.417357 containerd[1500]: time="2025-03-17T17:57:57.417102443Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\""
Mar 17 17:57:57.417790 containerd[1500]: time="2025-03-17T17:57:57.417772073Z" level=info msg="StartContainer for \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\""
Mar 17 17:57:57.445550 systemd[1]: Started cri-containerd-562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9.scope - libcontainer container 562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9.
Mar 17 17:57:57.473240 containerd[1500]: time="2025-03-17T17:57:57.473125550Z" level=info msg="StartContainer for \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\" returns successfully"
Mar 17 17:57:57.487455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:57:57.488263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:57:57.488524 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:57:57.495989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:57:57.496200 systemd[1]: cri-containerd-562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9.scope: Deactivated successfully. Mar 17 17:57:57.519929 containerd[1500]: time="2025-03-17T17:57:57.519731596Z" level=info msg="shim disconnected" id=562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9 namespace=k8s.io Mar 17 17:57:57.519929 containerd[1500]: time="2025-03-17T17:57:57.519783944Z" level=warning msg="cleaning up after shim disconnected" id=562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9 namespace=k8s.io Mar 17 17:57:57.519929 containerd[1500]: time="2025-03-17T17:57:57.519793943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:57.535615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:57:57.636913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9-rootfs.mount: Deactivated successfully. Mar 17 17:57:57.771074 sshd[3375]: Invalid user test2 from 220.81.148.101 port 34192 Mar 17 17:57:58.400618 containerd[1500]: time="2025-03-17T17:57:58.400402428Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:57:58.408602 sshd[3375]: Received disconnect from 220.81.148.101 port 34192:11: disconnected by user [preauth] Mar 17 17:57:58.408952 sshd[3375]: Disconnected from invalid user test2 220.81.148.101 port 34192 [preauth] Mar 17 17:57:58.412865 systemd[1]: sshd@34-37.27.0.76:22-220.81.148.101:34192.service: Deactivated successfully. 
Mar 17 17:57:58.436558 containerd[1500]: time="2025-03-17T17:57:58.436520414Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\"" Mar 17 17:57:58.440127 containerd[1500]: time="2025-03-17T17:57:58.437685974Z" level=info msg="StartContainer for \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\"" Mar 17 17:57:58.515556 systemd[1]: Started cri-containerd-6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f.scope - libcontainer container 6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f. Mar 17 17:57:58.549083 containerd[1500]: time="2025-03-17T17:57:58.549044429Z" level=info msg="StartContainer for \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\" returns successfully" Mar 17 17:57:58.554139 systemd[1]: cri-containerd-6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f.scope: Deactivated successfully. Mar 17 17:57:58.578353 containerd[1500]: time="2025-03-17T17:57:58.578286433Z" level=info msg="shim disconnected" id=6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f namespace=k8s.io Mar 17 17:57:58.578353 containerd[1500]: time="2025-03-17T17:57:58.578334074Z" level=warning msg="cleaning up after shim disconnected" id=6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f namespace=k8s.io Mar 17 17:57:58.578353 containerd[1500]: time="2025-03-17T17:57:58.578342079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:58.636391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f-rootfs.mount: Deactivated successfully. Mar 17 17:57:58.700212 systemd[1]: Started sshd@35-37.27.0.76:22-220.81.148.101:34948.service - OpenSSH per-connection server daemon (220.81.148.101:34948). 
Mar 17 17:57:59.271629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828553988.mount: Deactivated successfully. Mar 17 17:57:59.406400 containerd[1500]: time="2025-03-17T17:57:59.406200139Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:57:59.419695 containerd[1500]: time="2025-03-17T17:57:59.419620470Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\"" Mar 17 17:57:59.421067 containerd[1500]: time="2025-03-17T17:57:59.420212121Z" level=info msg="StartContainer for \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\"" Mar 17 17:57:59.453564 systemd[1]: Started cri-containerd-563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d.scope - libcontainer container 563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d. Mar 17 17:57:59.479306 systemd[1]: cri-containerd-563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d.scope: Deactivated successfully. 
Mar 17 17:57:59.489643 containerd[1500]: time="2025-03-17T17:57:59.489603807Z" level=info msg="StartContainer for \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\" returns successfully" Mar 17 17:57:59.514021 containerd[1500]: time="2025-03-17T17:57:59.513957427Z" level=info msg="shim disconnected" id=563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d namespace=k8s.io Mar 17 17:57:59.514277 containerd[1500]: time="2025-03-17T17:57:59.514234072Z" level=warning msg="cleaning up after shim disconnected" id=563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d namespace=k8s.io Mar 17 17:57:59.514277 containerd[1500]: time="2025-03-17T17:57:59.514248238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:59.636537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d-rootfs.mount: Deactivated successfully. Mar 17 17:58:00.413061 containerd[1500]: time="2025-03-17T17:58:00.412252932Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:58:00.437851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64685751.mount: Deactivated successfully. 
Mar 17 17:58:00.443867 containerd[1500]: time="2025-03-17T17:58:00.443805903Z" level=info msg="CreateContainer within sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\"" Mar 17 17:58:00.444681 containerd[1500]: time="2025-03-17T17:58:00.444602301Z" level=info msg="StartContainer for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\"" Mar 17 17:58:00.497778 systemd[1]: Started cri-containerd-481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c.scope - libcontainer container 481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c. Mar 17 17:58:00.547985 containerd[1500]: time="2025-03-17T17:58:00.547917482Z" level=info msg="StartContainer for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" returns successfully" Mar 17 17:58:00.562757 containerd[1500]: time="2025-03-17T17:58:00.562707033Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:00.566228 containerd[1500]: time="2025-03-17T17:58:00.566173761Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 17:58:00.569105 containerd[1500]: time="2025-03-17T17:58:00.567103833Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:00.576142 containerd[1500]: time="2025-03-17T17:58:00.573829535Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.029213329s" Mar 17 17:58:00.576142 containerd[1500]: time="2025-03-17T17:58:00.576135745Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 17:58:00.593074 containerd[1500]: time="2025-03-17T17:58:00.592569630Z" level=info msg="CreateContainer within sandbox \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:58:00.626441 containerd[1500]: time="2025-03-17T17:58:00.625158843Z" level=info msg="CreateContainer within sandbox \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\"" Mar 17 17:58:00.628427 containerd[1500]: time="2025-03-17T17:58:00.628063718Z" level=info msg="StartContainer for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\"" Mar 17 17:58:00.696789 systemd[1]: Started cri-containerd-ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe.scope - libcontainer container ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe. 
Mar 17 17:58:00.786731 containerd[1500]: time="2025-03-17T17:58:00.786460165Z" level=info msg="StartContainer for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" returns successfully" Mar 17 17:58:00.850795 kubelet[2997]: I0317 17:58:00.850546 2997 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:58:00.878547 kubelet[2997]: I0317 17:58:00.878088 2997 topology_manager.go:215] "Topology Admit Handler" podUID="143c8ce7-e225-4a0e-b6c6-d52386ec0af4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2lwsv" Mar 17 17:58:00.883196 kubelet[2997]: I0317 17:58:00.883144 2997 topology_manager.go:215] "Topology Admit Handler" podUID="0072d1b2-dfec-4b8f-9603-028004f6025d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kb9pf" Mar 17 17:58:00.887358 systemd[1]: Created slice kubepods-burstable-pod143c8ce7_e225_4a0e_b6c6_d52386ec0af4.slice - libcontainer container kubepods-burstable-pod143c8ce7_e225_4a0e_b6c6_d52386ec0af4.slice. Mar 17 17:58:00.889595 kubelet[2997]: W0317 17:58:00.889566 2997 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152-2-2-5-05efd5484b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-5-05efd5484b' and this object Mar 17 17:58:00.889667 kubelet[2997]: E0317 17:58:00.889622 2997 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152-2-2-5-05efd5484b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-5-05efd5484b' and this object Mar 17 17:58:00.899660 systemd[1]: Created slice kubepods-burstable-pod0072d1b2_dfec_4b8f_9603_028004f6025d.slice - libcontainer container 
kubepods-burstable-pod0072d1b2_dfec_4b8f_9603_028004f6025d.slice. Mar 17 17:58:00.987620 kubelet[2997]: I0317 17:58:00.987291 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0072d1b2-dfec-4b8f-9603-028004f6025d-config-volume\") pod \"coredns-7db6d8ff4d-kb9pf\" (UID: \"0072d1b2-dfec-4b8f-9603-028004f6025d\") " pod="kube-system/coredns-7db6d8ff4d-kb9pf" Mar 17 17:58:00.987620 kubelet[2997]: I0317 17:58:00.987331 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/143c8ce7-e225-4a0e-b6c6-d52386ec0af4-config-volume\") pod \"coredns-7db6d8ff4d-2lwsv\" (UID: \"143c8ce7-e225-4a0e-b6c6-d52386ec0af4\") " pod="kube-system/coredns-7db6d8ff4d-2lwsv" Mar 17 17:58:00.987620 kubelet[2997]: I0317 17:58:00.987349 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khrp\" (UniqueName: \"kubernetes.io/projected/143c8ce7-e225-4a0e-b6c6-d52386ec0af4-kube-api-access-5khrp\") pod \"coredns-7db6d8ff4d-2lwsv\" (UID: \"143c8ce7-e225-4a0e-b6c6-d52386ec0af4\") " pod="kube-system/coredns-7db6d8ff4d-2lwsv" Mar 17 17:58:00.987620 kubelet[2997]: I0317 17:58:00.987376 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pj9r\" (UniqueName: \"kubernetes.io/projected/0072d1b2-dfec-4b8f-9603-028004f6025d-kube-api-access-5pj9r\") pod \"coredns-7db6d8ff4d-kb9pf\" (UID: \"0072d1b2-dfec-4b8f-9603-028004f6025d\") " pod="kube-system/coredns-7db6d8ff4d-kb9pf" Mar 17 17:58:01.564934 kubelet[2997]: I0317 17:58:01.564859 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z86mm" podStartSLOduration=6.260866136 podStartE2EDuration="13.564825099s" podCreationTimestamp="2025-03-17 17:57:48 +0000 UTC" 
firstStartedPulling="2025-03-17 17:57:49.240434663 +0000 UTC m=+16.030524819" lastFinishedPulling="2025-03-17 17:57:56.544393625 +0000 UTC m=+23.334483782" observedRunningTime="2025-03-17 17:58:01.563434204 +0000 UTC m=+28.353524361" watchObservedRunningTime="2025-03-17 17:58:01.564825099 +0000 UTC m=+28.354915255" Mar 17 17:58:01.998449 sshd[3586]: Invalid user ubuntu from 220.81.148.101 port 34948 Mar 17 17:58:02.097935 containerd[1500]: time="2025-03-17T17:58:02.097808909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lwsv,Uid:143c8ce7-e225-4a0e-b6c6-d52386ec0af4,Namespace:kube-system,Attempt:0,}" Mar 17 17:58:02.104845 containerd[1500]: time="2025-03-17T17:58:02.104471138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kb9pf,Uid:0072d1b2-dfec-4b8f-9603-028004f6025d,Namespace:kube-system,Attempt:0,}" Mar 17 17:58:03.516827 sshd[3586]: maximum authentication attempts exceeded for invalid user ubuntu from 220.81.148.101 port 34948 ssh2 [preauth] Mar 17 17:58:03.517851 sshd[3586]: Disconnecting invalid user ubuntu 220.81.148.101 port 34948: Too many authentication failures [preauth] Mar 17 17:58:03.519906 systemd[1]: sshd@35-37.27.0.76:22-220.81.148.101:34948.service: Deactivated successfully. Mar 17 17:58:04.150925 systemd[1]: Started sshd@36-37.27.0.76:22-220.81.148.101:35732.service - OpenSSH per-connection server daemon (220.81.148.101:35732). 
Mar 17 17:58:05.313671 systemd-networkd[1403]: cilium_host: Link UP Mar 17 17:58:05.313828 systemd-networkd[1403]: cilium_net: Link UP Mar 17 17:58:05.313832 systemd-networkd[1403]: cilium_net: Gained carrier Mar 17 17:58:05.314033 systemd-networkd[1403]: cilium_host: Gained carrier Mar 17 17:58:05.439455 systemd-networkd[1403]: cilium_vxlan: Link UP Mar 17 17:58:05.439468 systemd-networkd[1403]: cilium_vxlan: Gained carrier Mar 17 17:58:05.667692 systemd-networkd[1403]: cilium_net: Gained IPv6LL Mar 17 17:58:05.841464 kernel: NET: Registered PF_ALG protocol family Mar 17 17:58:05.875642 systemd-networkd[1403]: cilium_host: Gained IPv6LL Mar 17 17:58:06.544666 systemd-networkd[1403]: lxc_health: Link UP Mar 17 17:58:06.557091 systemd-networkd[1403]: lxc_health: Gained carrier Mar 17 17:58:06.687362 kernel: eth0: renamed from tmp9f614 Mar 17 17:58:06.692897 systemd-networkd[1403]: lxcd79d477a427d: Link UP Mar 17 17:58:06.696206 systemd-networkd[1403]: lxcd79d477a427d: Gained carrier Mar 17 17:58:06.700962 systemd-networkd[1403]: lxc0fff6734da4a: Link UP Mar 17 17:58:06.703477 kernel: eth0: renamed from tmpe7eb8 Mar 17 17:58:06.706081 systemd-networkd[1403]: lxc0fff6734da4a: Gained carrier Mar 17 17:58:06.835594 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Mar 17 17:58:06.938117 sshd[3840]: Invalid user ubuntu from 220.81.148.101 port 35732 Mar 17 17:58:07.158243 kubelet[2997]: I0317 17:58:07.157101 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l8v5f" podStartSLOduration=7.048175694 podStartE2EDuration="18.157084648s" podCreationTimestamp="2025-03-17 17:57:49 +0000 UTC" firstStartedPulling="2025-03-17 17:57:49.467945071 +0000 UTC m=+16.258035229" lastFinishedPulling="2025-03-17 17:58:00.576854026 +0000 UTC m=+27.366944183" observedRunningTime="2025-03-17 17:58:01.596100823 +0000 UTC m=+28.386190979" watchObservedRunningTime="2025-03-17 17:58:07.157084648 +0000 UTC m=+33.947174806" Mar 17 
17:58:07.923608 systemd-networkd[1403]: lxcd79d477a427d: Gained IPv6LL Mar 17 17:58:08.052507 systemd-networkd[1403]: lxc0fff6734da4a: Gained IPv6LL Mar 17 17:58:08.436242 systemd-networkd[1403]: lxc_health: Gained IPv6LL Mar 17 17:58:08.526104 sshd[3840]: maximum authentication attempts exceeded for invalid user ubuntu from 220.81.148.101 port 35732 ssh2 [preauth] Mar 17 17:58:08.526104 sshd[3840]: Disconnecting invalid user ubuntu 220.81.148.101 port 35732: Too many authentication failures [preauth] Mar 17 17:58:08.529810 systemd[1]: sshd@36-37.27.0.76:22-220.81.148.101:35732.service: Deactivated successfully. Mar 17 17:58:09.134752 systemd[1]: Started sshd@37-37.27.0.76:22-220.81.148.101:36524.service - OpenSSH per-connection server daemon (220.81.148.101:36524). Mar 17 17:58:10.348178 containerd[1500]: time="2025-03-17T17:58:10.347671376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:10.348178 containerd[1500]: time="2025-03-17T17:58:10.347734125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:10.348178 containerd[1500]: time="2025-03-17T17:58:10.347757069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:10.348178 containerd[1500]: time="2025-03-17T17:58:10.347951827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:10.366998 containerd[1500]: time="2025-03-17T17:58:10.365949988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:58:10.367285 containerd[1500]: time="2025-03-17T17:58:10.367143517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:58:10.367285 containerd[1500]: time="2025-03-17T17:58:10.367165387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:10.367792 containerd[1500]: time="2025-03-17T17:58:10.367445788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:58:10.408893 systemd[1]: Started cri-containerd-9f61472677c8f31e112dcd167391ed3e8a036fe6011cb83013b60433aa8fcc9e.scope - libcontainer container 9f61472677c8f31e112dcd167391ed3e8a036fe6011cb83013b60433aa8fcc9e. Mar 17 17:58:10.434565 systemd[1]: Started cri-containerd-e7eb8ce8763358d5065ea0d983e71ab3e48b111f472ea754a0e15d8c80f8030d.scope - libcontainer container e7eb8ce8763358d5065ea0d983e71ab3e48b111f472ea754a0e15d8c80f8030d. Mar 17 17:58:10.512811 containerd[1500]: time="2025-03-17T17:58:10.512203333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lwsv,Uid:143c8ce7-e225-4a0e-b6c6-d52386ec0af4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7eb8ce8763358d5065ea0d983e71ab3e48b111f472ea754a0e15d8c80f8030d\"" Mar 17 17:58:10.519474 containerd[1500]: time="2025-03-17T17:58:10.519297862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kb9pf,Uid:0072d1b2-dfec-4b8f-9603-028004f6025d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f61472677c8f31e112dcd167391ed3e8a036fe6011cb83013b60433aa8fcc9e\"" Mar 17 17:58:10.520371 containerd[1500]: time="2025-03-17T17:58:10.519861618Z" level=info msg="CreateContainer within sandbox \"e7eb8ce8763358d5065ea0d983e71ab3e48b111f472ea754a0e15d8c80f8030d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:58:10.527522 containerd[1500]: time="2025-03-17T17:58:10.527229634Z" level=info msg="CreateContainer within sandbox 
\"9f61472677c8f31e112dcd167391ed3e8a036fe6011cb83013b60433aa8fcc9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:58:10.556251 containerd[1500]: time="2025-03-17T17:58:10.556200620Z" level=info msg="CreateContainer within sandbox \"9f61472677c8f31e112dcd167391ed3e8a036fe6011cb83013b60433aa8fcc9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57e6a62966a323c5c367978274eb1ec0ceab8c8207b130dbdf34b196b6854eb9\"" Mar 17 17:58:10.557821 containerd[1500]: time="2025-03-17T17:58:10.557792892Z" level=info msg="StartContainer for \"57e6a62966a323c5c367978274eb1ec0ceab8c8207b130dbdf34b196b6854eb9\"" Mar 17 17:58:10.560947 containerd[1500]: time="2025-03-17T17:58:10.560851206Z" level=info msg="CreateContainer within sandbox \"e7eb8ce8763358d5065ea0d983e71ab3e48b111f472ea754a0e15d8c80f8030d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e68d0ac3e2194b9e8badac379c0ca4b903cf7d786ec9b02155cccd4b5ca8ad2e\"" Mar 17 17:58:10.562594 containerd[1500]: time="2025-03-17T17:58:10.562304554Z" level=info msg="StartContainer for \"e68d0ac3e2194b9e8badac379c0ca4b903cf7d786ec9b02155cccd4b5ca8ad2e\"" Mar 17 17:58:10.597559 systemd[1]: Started cri-containerd-57e6a62966a323c5c367978274eb1ec0ceab8c8207b130dbdf34b196b6854eb9.scope - libcontainer container 57e6a62966a323c5c367978274eb1ec0ceab8c8207b130dbdf34b196b6854eb9. Mar 17 17:58:10.600702 systemd[1]: Started cri-containerd-e68d0ac3e2194b9e8badac379c0ca4b903cf7d786ec9b02155cccd4b5ca8ad2e.scope - libcontainer container e68d0ac3e2194b9e8badac379c0ca4b903cf7d786ec9b02155cccd4b5ca8ad2e. 
Mar 17 17:58:10.643556 containerd[1500]: time="2025-03-17T17:58:10.643405609Z" level=info msg="StartContainer for \"57e6a62966a323c5c367978274eb1ec0ceab8c8207b130dbdf34b196b6854eb9\" returns successfully" Mar 17 17:58:10.646353 containerd[1500]: time="2025-03-17T17:58:10.646307468Z" level=info msg="StartContainer for \"e68d0ac3e2194b9e8badac379c0ca4b903cf7d786ec9b02155cccd4b5ca8ad2e\" returns successfully" Mar 17 17:58:11.357908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044033242.mount: Deactivated successfully. Mar 17 17:58:11.475442 kubelet[2997]: I0317 17:58:11.474070 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kb9pf" podStartSLOduration=22.474048255 podStartE2EDuration="22.474048255s" podCreationTimestamp="2025-03-17 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:58:11.459909055 +0000 UTC m=+38.249999212" watchObservedRunningTime="2025-03-17 17:58:11.474048255 +0000 UTC m=+38.264138422" Mar 17 17:58:11.494026 kubelet[2997]: I0317 17:58:11.493744 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2lwsv" podStartSLOduration=22.493722421 podStartE2EDuration="22.493722421s" podCreationTimestamp="2025-03-17 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:58:11.476485934 +0000 UTC m=+38.266576092" watchObservedRunningTime="2025-03-17 17:58:11.493722421 +0000 UTC m=+38.283812578" Mar 17 17:58:12.703047 sshd[4225]: Invalid user ubuntu from 220.81.148.101 port 36524 Mar 17 17:58:13.917174 sshd[4225]: Received disconnect from 220.81.148.101 port 36524:11: disconnected by user [preauth] Mar 17 17:58:13.917174 sshd[4225]: Disconnected from invalid user ubuntu 220.81.148.101 port 36524 [preauth] Mar 17 
17:58:13.920222 systemd[1]: sshd@37-37.27.0.76:22-220.81.148.101:36524.service: Deactivated successfully. Mar 17 17:58:14.225706 systemd[1]: Started sshd@38-37.27.0.76:22-220.81.148.101:37218.service - OpenSSH per-connection server daemon (220.81.148.101:37218). Mar 17 17:58:16.835677 kubelet[2997]: I0317 17:58:16.828214 2997 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:58:17.335204 sshd[4403]: Invalid user pi from 220.81.148.101 port 37218 Mar 17 17:58:18.548909 sshd[4403]: Received disconnect from 220.81.148.101 port 37218:11: disconnected by user [preauth] Mar 17 17:58:18.548909 sshd[4403]: Disconnected from invalid user pi 220.81.148.101 port 37218 [preauth] Mar 17 17:58:18.551812 systemd[1]: sshd@38-37.27.0.76:22-220.81.148.101:37218.service: Deactivated successfully. Mar 17 17:58:18.891674 systemd[1]: Started sshd@39-37.27.0.76:22-220.81.148.101:37908.service - OpenSSH per-connection server daemon (220.81.148.101:37908). Mar 17 17:58:22.249626 sshd[4408]: Invalid user baikal from 220.81.148.101 port 37908 Mar 17 17:58:22.570026 sshd[4408]: Received disconnect from 220.81.148.101 port 37908:11: disconnected by user [preauth] Mar 17 17:58:22.570026 sshd[4408]: Disconnected from invalid user baikal 220.81.148.101 port 37908 [preauth] Mar 17 17:58:22.572948 systemd[1]: sshd@39-37.27.0.76:22-220.81.148.101:37908.service: Deactivated successfully. 
Mar 17 18:00:46.056442 update_engine[1490]: I20250317 18:00:46.056341 1490 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 18:00:46.056442 update_engine[1490]: I20250317 18:00:46.056437 1490 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 18:00:46.059105 update_engine[1490]: I20250317 18:00:46.059064 1490 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 18:00:46.059803 update_engine[1490]: I20250317 18:00:46.059777 1490 omaha_request_params.cc:62] Current group set to stable Mar 17 18:00:46.060036 update_engine[1490]: I20250317 18:00:46.059898 1490 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 18:00:46.060036 update_engine[1490]: I20250317 18:00:46.059911 1490 update_attempter.cc:643] Scheduling an action processor start. Mar 17 18:00:46.060036 update_engine[1490]: I20250317 18:00:46.059929 1490 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:00:46.060036 update_engine[1490]: I20250317 18:00:46.059967 1490 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 18:00:46.060036 update_engine[1490]: I20250317 18:00:46.060029 1490 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 18:00:46.060151 update_engine[1490]: I20250317 18:00:46.060037 1490 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Mar 17 18:00:46.060151 update_engine[1490]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Mar 17 18:00:46.060151 update_engine[1490]: <os version="Chateau" platform="CoreOS" sp="4152.2.2_x86_64"></os> Mar 17 18:00:46.060151 update_engine[1490]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.2" track="stable" bootid="{c856d409-a253-48c6-8824-fa7c723044c5}" oem="hetzner" oemversion="0" 
alephversion="4152.2.2" machineid="3db38ba185c7420cb84d031aa6e92fa0" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Mar 17 18:00:46.060151 update_engine[1490]: <ping active="1"></ping> Mar 17 18:00:46.060151 update_engine[1490]: <updatecheck></updatecheck> Mar 17 18:00:46.060151 update_engine[1490]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Mar 17 18:00:46.060151 update_engine[1490]: </app> Mar 17 18:00:46.060151 update_engine[1490]: </request> Mar 17 18:00:46.060151 update_engine[1490]: I20250317 18:00:46.060045 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:00:46.068377 update_engine[1490]: I20250317 18:00:46.068307 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:00:46.069087 update_engine[1490]: I20250317 18:00:46.068732 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:00:46.070101 update_engine[1490]: E20250317 18:00:46.069312 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:00:46.070101 update_engine[1490]: I20250317 18:00:46.069372 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 18:00:46.074154 locksmithd[1520]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 18:00:55.935910 update_engine[1490]: I20250317 18:00:55.935817 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:00:55.936367 update_engine[1490]: I20250317 18:00:55.936081 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:00:55.936367 update_engine[1490]: I20250317 18:00:55.936326 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 18:00:55.936727 update_engine[1490]: E20250317 18:00:55.936693 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:00:55.936769 update_engine[1490]: I20250317 18:00:55.936731 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 18:01:05.943603 update_engine[1490]: I20250317 18:01:05.943401 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:01:05.943977 update_engine[1490]: I20250317 18:01:05.943889 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:01:05.944206 update_engine[1490]: I20250317 18:01:05.944162 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:01:05.944523 update_engine[1490]: E20250317 18:01:05.944491 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:01:05.944559 update_engine[1490]: I20250317 18:01:05.944545 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 18:01:15.941264 update_engine[1490]: I20250317 18:01:15.941169 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:01:15.941778 update_engine[1490]: I20250317 18:01:15.941473 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:01:15.941778 update_engine[1490]: I20250317 18:01:15.941722 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 18:01:15.942037 update_engine[1490]: E20250317 18:01:15.942000 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:01:15.942076 update_engine[1490]: I20250317 18:01:15.942042 1490 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:01:15.942076 update_engine[1490]: I20250317 18:01:15.942054 1490 omaha_request_action.cc:617] Omaha request response: Mar 17 18:01:15.942175 update_engine[1490]: E20250317 18:01:15.942139 1490 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 17 18:01:15.942175 update_engine[1490]: I20250317 18:01:15.942164 1490 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 18:01:15.942175 update_engine[1490]: I20250317 18:01:15.942171 1490 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:01:15.942175 update_engine[1490]: I20250317 18:01:15.942178 1490 update_attempter.cc:306] Processing Done. Mar 17 18:01:15.942482 update_engine[1490]: E20250317 18:01:15.942193 1490 update_attempter.cc:619] Update failed. Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942200 1490 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942207 1490 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942214 1490 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942282 1490 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942299 1490 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942306 1490 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Mar 17 18:01:15.942482 update_engine[1490]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Mar 17 18:01:15.942482 update_engine[1490]: <os version="Chateau" platform="CoreOS" sp="4152.2.2_x86_64"></os>
Mar 17 18:01:15.942482 update_engine[1490]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.2" track="stable" bootid="{c856d409-a253-48c6-8824-fa7c723044c5}" oem="hetzner" oemversion="0" alephversion="4152.2.2" machineid="3db38ba185c7420cb84d031aa6e92fa0" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Mar 17 18:01:15.942482 update_engine[1490]: <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Mar 17 18:01:15.942482 update_engine[1490]: </app>
Mar 17 18:01:15.942482 update_engine[1490]: </request>
Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942314 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:01:15.942482 update_engine[1490]: I20250317 18:01:15.942481 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.942620 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:01:15.943332 update_engine[1490]: E20250317 18:01:15.943032 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943112 1490 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943122 1490 omaha_request_action.cc:617] Omaha request response:
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943132 1490 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943140 1490 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943147 1490 update_attempter.cc:306] Processing Done.
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943156 1490 update_attempter.cc:310] Error event sent.
Mar 17 18:01:15.943332 update_engine[1490]: I20250317 18:01:15.943171 1490 update_check_scheduler.cc:74] Next update check in 42m53s
Mar 17 18:01:15.943580 locksmithd[1520]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 18:01:15.943580 locksmithd[1520]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 17 18:02:21.710866 systemd[1]: Started sshd@40-37.27.0.76:22-139.178.68.195:37476.service - OpenSSH per-connection server daemon (139.178.68.195:37476).
Mar 17 18:02:22.694860 sshd[4448]: Accepted publickey for core from 139.178.68.195 port 37476 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:22.696895 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:22.701459 systemd-logind[1487]: New session 8 of user core.
Mar 17 18:02:22.710520 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 18:02:23.791457 sshd[4450]: Connection closed by 139.178.68.195 port 37476
Mar 17 18:02:23.792226 sshd-session[4448]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:23.795967 systemd[1]: sshd@40-37.27.0.76:22-139.178.68.195:37476.service: Deactivated successfully.
Mar 17 18:02:23.798166 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:02:23.798803 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:02:23.799720 systemd-logind[1487]: Removed session 8.
Mar 17 18:02:28.962716 systemd[1]: Started sshd@41-37.27.0.76:22-139.178.68.195:44344.service - OpenSSH per-connection server daemon (139.178.68.195:44344).
Mar 17 18:02:29.934728 sshd[4463]: Accepted publickey for core from 139.178.68.195 port 44344 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:29.936272 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:29.940687 systemd-logind[1487]: New session 9 of user core.
Mar 17 18:02:29.942564 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 18:02:30.694000 sshd[4465]: Connection closed by 139.178.68.195 port 44344
Mar 17 18:02:30.694749 sshd-session[4463]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:30.698922 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:02:30.699334 systemd[1]: sshd@41-37.27.0.76:22-139.178.68.195:44344.service: Deactivated successfully.
Mar 17 18:02:30.702237 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:02:30.703762 systemd-logind[1487]: Removed session 9.
Mar 17 18:02:35.860378 systemd[1]: Started sshd@42-37.27.0.76:22-139.178.68.195:50278.service - OpenSSH per-connection server daemon (139.178.68.195:50278).
Mar 17 18:02:36.833050 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 50278 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:36.835664 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:36.842478 systemd-logind[1487]: New session 10 of user core.
Mar 17 18:02:36.848588 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 18:02:37.564620 sshd[4480]: Connection closed by 139.178.68.195 port 50278
Mar 17 18:02:37.565321 sshd-session[4478]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:37.568104 systemd[1]: sshd@42-37.27.0.76:22-139.178.68.195:50278.service: Deactivated successfully.
Mar 17 18:02:37.570079 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:02:37.572103 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:02:37.573131 systemd-logind[1487]: Removed session 10.
Mar 17 18:02:37.737352 systemd[1]: Started sshd@43-37.27.0.76:22-139.178.68.195:50282.service - OpenSSH per-connection server daemon (139.178.68.195:50282).
Mar 17 18:02:38.712707 sshd[4491]: Accepted publickey for core from 139.178.68.195 port 50282 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:38.714236 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:38.718536 systemd-logind[1487]: New session 11 of user core.
Mar 17 18:02:38.723537 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 18:02:39.490512 sshd[4493]: Connection closed by 139.178.68.195 port 50282
Mar 17 18:02:39.491200 sshd-session[4491]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:39.495428 systemd[1]: sshd@43-37.27.0.76:22-139.178.68.195:50282.service: Deactivated successfully.
Mar 17 18:02:39.497752 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:02:39.499088 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:02:39.500265 systemd-logind[1487]: Removed session 11.
Mar 17 18:02:39.661962 systemd[1]: Started sshd@44-37.27.0.76:22-139.178.68.195:50298.service - OpenSSH per-connection server daemon (139.178.68.195:50298).
Mar 17 18:02:40.638679 sshd[4502]: Accepted publickey for core from 139.178.68.195 port 50298 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:40.640321 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:40.644744 systemd-logind[1487]: New session 12 of user core.
Mar 17 18:02:40.649530 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 18:02:41.394306 sshd[4504]: Connection closed by 139.178.68.195 port 50298
Mar 17 18:02:41.395100 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:41.399719 systemd[1]: sshd@44-37.27.0.76:22-139.178.68.195:50298.service: Deactivated successfully.
Mar 17 18:02:41.402792 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:02:41.403486 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:02:41.404459 systemd-logind[1487]: Removed session 12.
Mar 17 18:02:46.565747 systemd[1]: Started sshd@45-37.27.0.76:22-139.178.68.195:38460.service - OpenSSH per-connection server daemon (139.178.68.195:38460).
Mar 17 18:02:47.537234 sshd[4515]: Accepted publickey for core from 139.178.68.195 port 38460 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:47.538992 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:47.543872 systemd-logind[1487]: New session 13 of user core.
Mar 17 18:02:47.550566 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 18:02:48.287867 sshd[4517]: Connection closed by 139.178.68.195 port 38460
Mar 17 18:02:48.288658 sshd-session[4515]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:48.292451 systemd[1]: sshd@45-37.27.0.76:22-139.178.68.195:38460.service: Deactivated successfully.
Mar 17 18:02:48.294503 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:02:48.296177 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:02:48.297359 systemd-logind[1487]: Removed session 13.
Mar 17 18:02:48.454781 systemd[1]: Started sshd@46-37.27.0.76:22-139.178.68.195:38476.service - OpenSSH per-connection server daemon (139.178.68.195:38476).
Mar 17 18:02:49.431003 sshd[4528]: Accepted publickey for core from 139.178.68.195 port 38476 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:49.432773 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:49.437737 systemd-logind[1487]: New session 14 of user core.
Mar 17 18:02:49.444564 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 18:02:50.318673 sshd[4530]: Connection closed by 139.178.68.195 port 38476
Mar 17 18:02:50.320591 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:50.326840 systemd[1]: sshd@46-37.27.0.76:22-139.178.68.195:38476.service: Deactivated successfully.
Mar 17 18:02:50.329487 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:02:50.330672 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:02:50.331870 systemd-logind[1487]: Removed session 14.
Mar 17 18:02:50.491960 systemd[1]: Started sshd@47-37.27.0.76:22-139.178.68.195:38486.service - OpenSSH per-connection server daemon (139.178.68.195:38486).
Mar 17 18:02:51.478137 sshd[4541]: Accepted publickey for core from 139.178.68.195 port 38486 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:51.479827 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:51.484759 systemd-logind[1487]: New session 15 of user core.
Mar 17 18:02:51.488568 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 18:02:53.928705 sshd[4543]: Connection closed by 139.178.68.195 port 38486
Mar 17 18:02:53.931352 sshd-session[4541]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:53.940926 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:02:53.942160 systemd[1]: sshd@47-37.27.0.76:22-139.178.68.195:38486.service: Deactivated successfully.
Mar 17 18:02:53.944762 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:02:53.945606 systemd-logind[1487]: Removed session 15.
Mar 17 18:02:54.100950 systemd[1]: Started sshd@48-37.27.0.76:22-139.178.68.195:38494.service - OpenSSH per-connection server daemon (139.178.68.195:38494).
Mar 17 18:02:55.071407 sshd[4562]: Accepted publickey for core from 139.178.68.195 port 38494 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:55.073183 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:55.078084 systemd-logind[1487]: New session 16 of user core.
Mar 17 18:02:55.082534 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 18:02:55.934112 sshd[4564]: Connection closed by 139.178.68.195 port 38494
Mar 17 18:02:55.934858 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:55.939592 systemd[1]: sshd@48-37.27.0.76:22-139.178.68.195:38494.service: Deactivated successfully.
Mar 17 18:02:55.942188 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:02:55.943792 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:02:55.945620 systemd-logind[1487]: Removed session 16.
Mar 17 18:02:56.114656 systemd[1]: Started sshd@49-37.27.0.76:22-139.178.68.195:52364.service - OpenSSH per-connection server daemon (139.178.68.195:52364).
Mar 17 18:02:57.100591 sshd[4573]: Accepted publickey for core from 139.178.68.195 port 52364 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:02:57.102212 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:02:57.106403 systemd-logind[1487]: New session 17 of user core.
Mar 17 18:02:57.110560 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 18:02:57.841531 sshd[4575]: Connection closed by 139.178.68.195 port 52364
Mar 17 18:02:57.842158 sshd-session[4573]: pam_unix(sshd:session): session closed for user core
Mar 17 18:02:57.846129 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:02:57.846436 systemd[1]: sshd@49-37.27.0.76:22-139.178.68.195:52364.service: Deactivated successfully.
Mar 17 18:02:57.848997 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:02:57.849877 systemd-logind[1487]: Removed session 17.
Mar 17 18:03:03.005469 systemd[1]: Started sshd@50-37.27.0.76:22-139.178.68.195:52378.service - OpenSSH per-connection server daemon (139.178.68.195:52378).
Mar 17 18:03:03.978893 sshd[4589]: Accepted publickey for core from 139.178.68.195 port 52378 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:03:03.980605 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:03:03.987380 systemd-logind[1487]: New session 18 of user core.
Mar 17 18:03:03.993550 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 18:03:04.713879 sshd[4591]: Connection closed by 139.178.68.195 port 52378
Mar 17 18:03:04.714508 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
Mar 17 18:03:04.718127 systemd[1]: sshd@50-37.27.0.76:22-139.178.68.195:52378.service: Deactivated successfully.
Mar 17 18:03:04.720308 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:03:04.721104 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:03:04.722070 systemd-logind[1487]: Removed session 18.
Mar 17 18:03:09.881484 systemd[1]: Started sshd@51-37.27.0.76:22-139.178.68.195:41966.service - OpenSSH per-connection server daemon (139.178.68.195:41966).
Mar 17 18:03:10.855930 sshd[4603]: Accepted publickey for core from 139.178.68.195 port 41966 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:03:10.857581 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:03:10.862197 systemd-logind[1487]: New session 19 of user core.
Mar 17 18:03:10.867620 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 18:03:11.585504 sshd[4605]: Connection closed by 139.178.68.195 port 41966
Mar 17 18:03:11.586203 sshd-session[4603]: pam_unix(sshd:session): session closed for user core
Mar 17 18:03:11.590259 systemd[1]: sshd@51-37.27.0.76:22-139.178.68.195:41966.service: Deactivated successfully.
Mar 17 18:03:11.592836 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:03:11.595243 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:03:11.596647 systemd-logind[1487]: Removed session 19.
Mar 17 18:03:11.757680 systemd[1]: Started sshd@52-37.27.0.76:22-139.178.68.195:41970.service - OpenSSH per-connection server daemon (139.178.68.195:41970).
Mar 17 18:03:12.724974 sshd[4616]: Accepted publickey for core from 139.178.68.195 port 41970 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:03:12.726616 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:03:12.731023 systemd-logind[1487]: New session 20 of user core.
Mar 17 18:03:12.737540 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 18:03:14.692284 containerd[1500]: time="2025-03-17T18:03:14.692151905Z" level=info msg="StopContainer for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" with timeout 30 (s)"
Mar 17 18:03:14.693578 containerd[1500]: time="2025-03-17T18:03:14.692898007Z" level=info msg="Stop container \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" with signal terminated"
Mar 17 18:03:14.763534 systemd[1]: cri-containerd-ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe.scope: Deactivated successfully.
Mar 17 18:03:14.783341 containerd[1500]: time="2025-03-17T18:03:14.782736164Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:03:14.786598 containerd[1500]: time="2025-03-17T18:03:14.786575643Z" level=info msg="StopContainer for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" with timeout 2 (s)"
Mar 17 18:03:14.787197 containerd[1500]: time="2025-03-17T18:03:14.787177180Z" level=info msg="Stop container \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" with signal terminated"
Mar 17 18:03:14.794623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe-rootfs.mount: Deactivated successfully.
Mar 17 18:03:14.798855 systemd-networkd[1403]: lxc_health: Link DOWN
Mar 17 18:03:14.798864 systemd-networkd[1403]: lxc_health: Lost carrier
Mar 17 18:03:14.815984 containerd[1500]: time="2025-03-17T18:03:14.815605961Z" level=info msg="shim disconnected" id=ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe namespace=k8s.io
Mar 17 18:03:14.815984 containerd[1500]: time="2025-03-17T18:03:14.815665083Z" level=warning msg="cleaning up after shim disconnected" id=ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe namespace=k8s.io
Mar 17 18:03:14.815984 containerd[1500]: time="2025-03-17T18:03:14.815674501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:03:14.825235 systemd[1]: cri-containerd-481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c.scope: Deactivated successfully.
Mar 17 18:03:14.825519 systemd[1]: cri-containerd-481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c.scope: Consumed 7.847s CPU time.
Mar 17 18:03:14.837269 containerd[1500]: time="2025-03-17T18:03:14.837109808Z" level=info msg="StopContainer for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" returns successfully"
Mar 17 18:03:14.837824 containerd[1500]: time="2025-03-17T18:03:14.837741022Z" level=info msg="StopPodSandbox for \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\""
Mar 17 18:03:14.846569 containerd[1500]: time="2025-03-17T18:03:14.841386374Z" level=info msg="Container to stop \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:03:14.852970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c-rootfs.mount: Deactivated successfully.
Mar 17 18:03:14.853405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4-shm.mount: Deactivated successfully.
Mar 17 18:03:14.862059 containerd[1500]: time="2025-03-17T18:03:14.861967897Z" level=info msg="shim disconnected" id=481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c namespace=k8s.io
Mar 17 18:03:14.862059 containerd[1500]: time="2025-03-17T18:03:14.862059009Z" level=warning msg="cleaning up after shim disconnected" id=481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c namespace=k8s.io
Mar 17 18:03:14.862258 containerd[1500]: time="2025-03-17T18:03:14.862068357Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:03:14.863926 systemd[1]: cri-containerd-fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4.scope: Deactivated successfully.
Mar 17 18:03:14.880140 containerd[1500]: time="2025-03-17T18:03:14.880099409Z" level=info msg="StopContainer for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" returns successfully"
Mar 17 18:03:14.880780 containerd[1500]: time="2025-03-17T18:03:14.880761410Z" level=info msg="StopPodSandbox for \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\""
Mar 17 18:03:14.880982 containerd[1500]: time="2025-03-17T18:03:14.880842503Z" level=info msg="Container to stop \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:03:14.880982 containerd[1500]: time="2025-03-17T18:03:14.880878832Z" level=info msg="Container to stop \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:03:14.880982 containerd[1500]: time="2025-03-17T18:03:14.880889042Z" level=info msg="Container to stop \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:03:14.880982 containerd[1500]: time="2025-03-17T18:03:14.880897447Z" level=info msg="Container to stop \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:03:14.880982 containerd[1500]: time="2025-03-17T18:03:14.880905513Z" level=info msg="Container to stop \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:03:14.883613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d-shm.mount: Deactivated successfully.
Mar 17 18:03:14.889673 systemd[1]: cri-containerd-1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d.scope: Deactivated successfully.
Mar 17 18:03:14.900133 containerd[1500]: time="2025-03-17T18:03:14.899949409Z" level=info msg="shim disconnected" id=fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4 namespace=k8s.io
Mar 17 18:03:14.900133 containerd[1500]: time="2025-03-17T18:03:14.900014873Z" level=warning msg="cleaning up after shim disconnected" id=fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4 namespace=k8s.io
Mar 17 18:03:14.900133 containerd[1500]: time="2025-03-17T18:03:14.900025182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:03:14.917386 containerd[1500]: time="2025-03-17T18:03:14.917330342Z" level=info msg="shim disconnected" id=1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d namespace=k8s.io
Mar 17 18:03:14.918490 containerd[1500]: time="2025-03-17T18:03:14.917613286Z" level=warning msg="cleaning up after shim disconnected" id=1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d namespace=k8s.io
Mar 17 18:03:14.918490 containerd[1500]: time="2025-03-17T18:03:14.917627183Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:03:14.918490 containerd[1500]: time="2025-03-17T18:03:14.917755125Z" level=info msg="TearDown network for sandbox \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" successfully"
Mar 17 18:03:14.918490 containerd[1500]: time="2025-03-17T18:03:14.917769842Z" level=info msg="StopPodSandbox for \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" returns successfully"
Mar 17 18:03:14.936246 containerd[1500]: time="2025-03-17T18:03:14.936089779Z" level=info msg="TearDown network for sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" successfully"
Mar 17 18:03:14.936246 containerd[1500]: time="2025-03-17T18:03:14.936120528Z" level=info msg="StopPodSandbox for \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" returns successfully"
Mar 17 18:03:14.992474 kubelet[2997]: I0317 18:03:14.991543 2997 scope.go:117] "RemoveContainer" containerID="481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c"
Mar 17 18:03:14.997974 containerd[1500]: time="2025-03-17T18:03:14.997931134Z" level=info msg="RemoveContainer for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\""
Mar 17 18:03:15.001723 containerd[1500]: time="2025-03-17T18:03:15.001682406Z" level=info msg="RemoveContainer for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" returns successfully"
Mar 17 18:03:15.001958 kubelet[2997]: I0317 18:03:15.001868 2997 scope.go:117] "RemoveContainer" containerID="563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d"
Mar 17 18:03:15.002667 containerd[1500]: time="2025-03-17T18:03:15.002639826Z" level=info msg="RemoveContainer for \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\""
Mar 17 18:03:15.007442 containerd[1500]: time="2025-03-17T18:03:15.005623468Z" level=info msg="RemoveContainer for \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\" returns successfully"
Mar 17 18:03:15.007532 kubelet[2997]: I0317 18:03:15.007496 2997 scope.go:117] "RemoveContainer" containerID="6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f"
Mar 17 18:03:15.009822 containerd[1500]: time="2025-03-17T18:03:15.009789985Z" level=info msg="RemoveContainer for \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\""
Mar 17 18:03:15.012584 containerd[1500]: time="2025-03-17T18:03:15.012552037Z" level=info msg="RemoveContainer for \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\" returns successfully"
Mar 17 18:03:15.012738 kubelet[2997]: I0317 18:03:15.012691 2997 scope.go:117] "RemoveContainer" containerID="562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9"
Mar 17 18:03:15.013514 containerd[1500]: time="2025-03-17T18:03:15.013483929Z" level=info msg="RemoveContainer for \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\""
Mar 17 18:03:15.016097 containerd[1500]: time="2025-03-17T18:03:15.016061412Z" level=info msg="RemoveContainer for \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\" returns successfully"
Mar 17 18:03:15.016203 kubelet[2997]: I0317 18:03:15.016163 2997 scope.go:117] "RemoveContainer" containerID="29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9"
Mar 17 18:03:15.016831 containerd[1500]: time="2025-03-17T18:03:15.016806511Z" level=info msg="RemoveContainer for \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\""
Mar 17 18:03:15.019467 containerd[1500]: time="2025-03-17T18:03:15.019441253Z" level=info msg="RemoveContainer for \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\" returns successfully"
Mar 17 18:03:15.019596 kubelet[2997]: I0317 18:03:15.019566 2997 scope.go:117] "RemoveContainer" containerID="481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c"
Mar 17 18:03:15.019967 containerd[1500]: time="2025-03-17T18:03:15.019746821Z" level=error msg="ContainerStatus for \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\": not found"
Mar 17 18:03:15.026785 kubelet[2997]: E0317 18:03:15.026748 2997 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\": not found" containerID="481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c"
Mar 17 18:03:15.026885 kubelet[2997]: I0317 18:03:15.026792 2997 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c"} err="failed to get container status \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"481f140c5068309ecc491886f556c5ac942af43613af0118cbc4a49daba94c3c\": not found"
Mar 17 18:03:15.026885 kubelet[2997]: I0317 18:03:15.026869 2997 scope.go:117] "RemoveContainer" containerID="563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d"
Mar 17 18:03:15.027065 containerd[1500]: time="2025-03-17T18:03:15.027016394Z" level=error msg="ContainerStatus for \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\": not found"
Mar 17 18:03:15.027150 kubelet[2997]: E0317 18:03:15.027131 2997 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\": not found" containerID="563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d"
Mar 17 18:03:15.027182 kubelet[2997]: I0317 18:03:15.027150 2997 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d"} err="failed to get container status \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"563bba9506a17a4e92188281058c30e9df150aba63548dbff5118522dcbb4b1d\": not found"
Mar 17 18:03:15.027182 kubelet[2997]: I0317 18:03:15.027162 2997 scope.go:117] "RemoveContainer" containerID="6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f"
Mar 17 18:03:15.027285 containerd[1500]: time="2025-03-17T18:03:15.027263412Z" level=error msg="ContainerStatus for \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\": not found"
Mar 17 18:03:15.027388 kubelet[2997]: E0317 18:03:15.027367 2997 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\": not found" containerID="6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f"
Mar 17 18:03:15.027457 kubelet[2997]: I0317 18:03:15.027385 2997 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f"} err="failed to get container status \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ae8f855aa9db658bd31ce8b384ed0dff0125df538a90279a3f938964e20d68f\": not found"
Mar 17 18:03:15.027457 kubelet[2997]: I0317 18:03:15.027396 2997 scope.go:117] "RemoveContainer" containerID="562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9"
Mar 17 18:03:15.027582 containerd[1500]: time="2025-03-17T18:03:15.027551807Z" level=error msg="ContainerStatus for \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\": not found"
Mar 17 18:03:15.027643 kubelet[2997]: E0317 18:03:15.027626 2997 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\": not found" containerID="562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9"
Mar 17 18:03:15.027673 kubelet[2997]: I0317 18:03:15.027644 2997 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9"} err="failed to get container status \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"562c37aab82f5fff23d644580e90ea5f464e2c02c8d1f2b03cc23bf9d20770c9\": not found"
Mar 17 18:03:15.027673 kubelet[2997]: I0317 18:03:15.027655 2997 scope.go:117] "RemoveContainer" containerID="29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9"
Mar 17 18:03:15.027819 containerd[1500]: time="2025-03-17T18:03:15.027787643Z" level=error msg="ContainerStatus for \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\": not found"
Mar 17 18:03:15.027919 kubelet[2997]: E0317 18:03:15.027866 2997 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\": not found" containerID="29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9"
Mar 17 18:03:15.027919 kubelet[2997]: I0317 18:03:15.027883 2997 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9"} err="failed to get container status \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\": rpc error: code = NotFound desc = an error occurred when try to find container \"29dde0d3a578a8ee8edaf85fca666c62a721116cdc7475945b55b9db3e589de9\": not found"
Mar 17 18:03:15.027919 kubelet[2997]: I0317 18:03:15.027894 2997 scope.go:117] "RemoveContainer" containerID="ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe"
Mar 17 18:03:15.028679 containerd[1500]: time="2025-03-17T18:03:15.028656125Z" level=info msg="RemoveContainer for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\""
Mar 17 18:03:15.038854 containerd[1500]: time="2025-03-17T18:03:15.038784475Z" level=info msg="RemoveContainer for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" returns successfully"
Mar 17 18:03:15.039050 kubelet[2997]: I0317 18:03:15.038982 2997 scope.go:117] "RemoveContainer" containerID="ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe"
Mar 17 18:03:15.039309 containerd[1500]: time="2025-03-17T18:03:15.039174923Z" level=error msg="ContainerStatus for \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\": not found"
Mar 17 18:03:15.039403 kubelet[2997]: E0317 18:03:15.039273
2997 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\": not found" containerID="ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe" Mar 17 18:03:15.039501 kubelet[2997]: I0317 18:03:15.039456 2997 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe"} err="failed to get container status \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddea3a8a6b3fdd21d96292e80c551dc70be80a53355badeeea3ba433abf6fbbe\": not found" Mar 17 18:03:15.044889 kubelet[2997]: I0317 18:03:15.044731 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-config-path\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.044889 kubelet[2997]: I0317 18:03:15.044758 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cni-path\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.044889 kubelet[2997]: I0317 18:03:15.044777 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b71e734-ba40-479d-8236-2c82f8e70f24-clustermesh-secrets\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.044889 kubelet[2997]: I0317 18:03:15.044789 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-net\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.044889 kubelet[2997]: I0317 18:03:15.044804 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f55707a4-7c0b-4f07-ab76-7a757044df26-cilium-config-path\") pod \"f55707a4-7c0b-4f07-ab76-7a757044df26\" (UID: \"f55707a4-7c0b-4f07-ab76-7a757044df26\") " Mar 17 18:03:15.044889 kubelet[2997]: I0317 18:03:15.044816 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-etc-cni-netd\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045329 kubelet[2997]: I0317 18:03:15.044828 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-run\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045329 kubelet[2997]: I0317 18:03:15.044843 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-hubble-tls\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045329 kubelet[2997]: I0317 18:03:15.044856 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-cgroup\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045329 
kubelet[2997]: I0317 18:03:15.044870 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-xtables-lock\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045329 kubelet[2997]: I0317 18:03:15.044883 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-hostproc\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045329 kubelet[2997]: I0317 18:03:15.044903 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-lib-modules\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045486 kubelet[2997]: I0317 18:03:15.044915 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-bpf-maps\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045486 kubelet[2997]: I0317 18:03:15.044932 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mql8\" (UniqueName: \"kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-kube-api-access-7mql8\") pod \"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045486 kubelet[2997]: I0317 18:03:15.044946 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-kernel\") pod 
\"4b71e734-ba40-479d-8236-2c82f8e70f24\" (UID: \"4b71e734-ba40-479d-8236-2c82f8e70f24\") " Mar 17 18:03:15.045486 kubelet[2997]: I0317 18:03:15.044961 2997 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm4kp\" (UniqueName: \"kubernetes.io/projected/f55707a4-7c0b-4f07-ab76-7a757044df26-kube-api-access-rm4kp\") pod \"f55707a4-7c0b-4f07-ab76-7a757044df26\" (UID: \"f55707a4-7c0b-4f07-ab76-7a757044df26\") " Mar 17 18:03:15.053105 kubelet[2997]: I0317 18:03:15.050600 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.053105 kubelet[2997]: I0317 18:03:15.052580 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.053105 kubelet[2997]: I0317 18:03:15.052600 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-hostproc" (OuterVolumeSpecName: "hostproc") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.053105 kubelet[2997]: I0317 18:03:15.052631 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.053105 kubelet[2997]: I0317 18:03:15.052645 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.053614 kubelet[2997]: I0317 18:03:15.053337 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.055146 kubelet[2997]: I0317 18:03:15.054925 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cni-path" (OuterVolumeSpecName: "cni-path") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.057969 kubelet[2997]: I0317 18:03:15.057942 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-kube-api-access-7mql8" (OuterVolumeSpecName: "kube-api-access-7mql8") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "kube-api-access-7mql8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:03:15.059033 kubelet[2997]: I0317 18:03:15.059001 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.059282 kubelet[2997]: I0317 18:03:15.058999 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.059282 kubelet[2997]: I0317 18:03:15.059244 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:03:15.063944 kubelet[2997]: I0317 18:03:15.063921 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55707a4-7c0b-4f07-ab76-7a757044df26-kube-api-access-rm4kp" (OuterVolumeSpecName: "kube-api-access-rm4kp") pod "f55707a4-7c0b-4f07-ab76-7a757044df26" (UID: "f55707a4-7c0b-4f07-ab76-7a757044df26"). InnerVolumeSpecName "kube-api-access-rm4kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:03:15.064742 kubelet[2997]: I0317 18:03:15.064697 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:03:15.065272 kubelet[2997]: I0317 18:03:15.065250 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b71e734-ba40-479d-8236-2c82f8e70f24-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:03:15.065675 kubelet[2997]: I0317 18:03:15.065651 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b71e734-ba40-479d-8236-2c82f8e70f24" (UID: "4b71e734-ba40-479d-8236-2c82f8e70f24"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:03:15.066044 kubelet[2997]: I0317 18:03:15.066023 2997 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55707a4-7c0b-4f07-ab76-7a757044df26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f55707a4-7c0b-4f07-ab76-7a757044df26" (UID: "f55707a4-7c0b-4f07-ab76-7a757044df26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:03:15.147358 kubelet[2997]: I0317 18:03:15.147307 2997 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-config-path\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147358 kubelet[2997]: I0317 18:03:15.147347 2997 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cni-path\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147358 kubelet[2997]: I0317 18:03:15.147357 2997 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f55707a4-7c0b-4f07-ab76-7a757044df26-cilium-config-path\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147358 kubelet[2997]: I0317 18:03:15.147367 2997 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-etc-cni-netd\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147375 2997 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b71e734-ba40-479d-8236-2c82f8e70f24-clustermesh-secrets\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 
18:03:15.147383 2997 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-net\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147392 2997 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-hubble-tls\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147399 2997 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-cgroup\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147407 2997 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-cilium-run\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147452 2997 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-hostproc\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147461 2997 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-xtables-lock\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147686 kubelet[2997]: I0317 18:03:15.147469 2997 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-bpf-maps\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147885 kubelet[2997]: I0317 18:03:15.147479 2997 
reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7mql8\" (UniqueName: \"kubernetes.io/projected/4b71e734-ba40-479d-8236-2c82f8e70f24-kube-api-access-7mql8\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147885 kubelet[2997]: I0317 18:03:15.147488 2997 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-lib-modules\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147885 kubelet[2997]: I0317 18:03:15.147497 2997 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b71e734-ba40-479d-8236-2c82f8e70f24-host-proc-sys-kernel\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.147885 kubelet[2997]: I0317 18:03:15.147505 2997 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rm4kp\" (UniqueName: \"kubernetes.io/projected/f55707a4-7c0b-4f07-ab76-7a757044df26-kube-api-access-rm4kp\") on node \"ci-4152-2-2-5-05efd5484b\" DevicePath \"\"" Mar 17 18:03:15.295310 systemd[1]: Removed slice kubepods-burstable-pod4b71e734_ba40_479d_8236_2c82f8e70f24.slice - libcontainer container kubepods-burstable-pod4b71e734_ba40_479d_8236_2c82f8e70f24.slice. Mar 17 18:03:15.295508 systemd[1]: kubepods-burstable-pod4b71e734_ba40_479d_8236_2c82f8e70f24.slice: Consumed 7.943s CPU time. Mar 17 18:03:15.313088 systemd[1]: Removed slice kubepods-besteffort-podf55707a4_7c0b_4f07_ab76_7a757044df26.slice - libcontainer container kubepods-besteffort-podf55707a4_7c0b_4f07_ab76_7a757044df26.slice. Mar 17 18:03:15.754452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4-rootfs.mount: Deactivated successfully. 
Mar 17 18:03:15.754562 systemd[1]: var-lib-kubelet-pods-f55707a4\x2d7c0b\x2d4f07\x2dab76\x2d7a757044df26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drm4kp.mount: Deactivated successfully.
Mar 17 18:03:15.754642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d-rootfs.mount: Deactivated successfully.
Mar 17 18:03:15.754732 systemd[1]: var-lib-kubelet-pods-4b71e734\x2dba40\x2d479d\x2d8236\x2d2c82f8e70f24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7mql8.mount: Deactivated successfully.
Mar 17 18:03:15.754829 systemd[1]: var-lib-kubelet-pods-4b71e734\x2dba40\x2d479d\x2d8236\x2d2c82f8e70f24-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:03:15.754929 systemd[1]: var-lib-kubelet-pods-4b71e734\x2dba40\x2d479d\x2d8236\x2d2c82f8e70f24-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:03:16.809057 sshd[4618]: Connection closed by 139.178.68.195 port 41970
Mar 17 18:03:16.809897 sshd-session[4616]: pam_unix(sshd:session): session closed for user core
Mar 17 18:03:16.813016 systemd[1]: sshd@52-37.27.0.76:22-139.178.68.195:41970.service: Deactivated successfully.
Mar 17 18:03:16.815390 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:03:16.817781 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:03:16.820135 systemd-logind[1487]: Removed session 20.
Mar 17 18:03:16.981850 systemd[1]: Started sshd@53-37.27.0.76:22-139.178.68.195:55988.service - OpenSSH per-connection server daemon (139.178.68.195:55988).
Mar 17 18:03:17.319169 kubelet[2997]: I0317 18:03:17.319129 2997 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" path="/var/lib/kubelet/pods/4b71e734-ba40-479d-8236-2c82f8e70f24/volumes"
Mar 17 18:03:17.320060 kubelet[2997]: I0317 18:03:17.320038 2997 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55707a4-7c0b-4f07-ab76-7a757044df26" path="/var/lib/kubelet/pods/f55707a4-7c0b-4f07-ab76-7a757044df26/volumes"
Mar 17 18:03:17.958256 sshd[4778]: Accepted publickey for core from 139.178.68.195 port 55988 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo
Mar 17 18:03:17.960016 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:03:17.964481 systemd-logind[1487]: New session 21 of user core.
Mar 17 18:03:17.971546 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 18:03:18.451318 kubelet[2997]: E0317 18:03:18.451258 2997 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:03:19.007959 kubelet[2997]: I0317 18:03:19.006067 2997 topology_manager.go:215] "Topology Admit Handler" podUID="1b834892-05af-413a-b6fa-fdde617a1423" podNamespace="kube-system" podName="cilium-j9nk9"
Mar 17 18:03:19.007959 kubelet[2997]: E0317 18:03:19.006147 2997 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" containerName="apply-sysctl-overwrites"
Mar 17 18:03:19.007959 kubelet[2997]: E0317 18:03:19.006156 2997 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" containerName="mount-bpf-fs"
Mar 17 18:03:19.007959 kubelet[2997]: E0317 18:03:19.006162 2997 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" containerName="cilium-agent"
Mar 17 18:03:19.007959 kubelet[2997]: E0317 18:03:19.006170 2997 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" containerName="mount-cgroup"
Mar 17 18:03:19.007959 kubelet[2997]: E0317 18:03:19.006176 2997 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" containerName="clean-cilium-state"
Mar 17 18:03:19.007959 kubelet[2997]: E0317 18:03:19.006182 2997 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f55707a4-7c0b-4f07-ab76-7a757044df26" containerName="cilium-operator"
Mar 17 18:03:19.007959 kubelet[2997]: I0317 18:03:19.006202 2997 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b71e734-ba40-479d-8236-2c82f8e70f24" containerName="cilium-agent"
Mar 17 18:03:19.007959 kubelet[2997]: I0317 18:03:19.006208 2997 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55707a4-7c0b-4f07-ab76-7a757044df26" containerName="cilium-operator"
Mar 17 18:03:19.017096 systemd[1]: Created slice kubepods-burstable-pod1b834892_05af_413a_b6fa_fdde617a1423.slice - libcontainer container kubepods-burstable-pod1b834892_05af_413a_b6fa_fdde617a1423.slice.
Mar 17 18:03:19.070998 kubelet[2997]: I0317 18:03:19.070941 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-cilium-run\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.070998 kubelet[2997]: I0317 18:03:19.070996 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-etc-cni-netd\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071140 kubelet[2997]: I0317 18:03:19.071019 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-host-proc-sys-kernel\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071140 kubelet[2997]: I0317 18:03:19.071036 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-cni-path\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071140 kubelet[2997]: I0317 18:03:19.071054 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgzx\" (UniqueName: \"kubernetes.io/projected/1b834892-05af-413a-b6fa-fdde617a1423-kube-api-access-8qgzx\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071140 kubelet[2997]: I0317 18:03:19.071068 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-cilium-cgroup\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071140 kubelet[2997]: I0317 18:03:19.071082 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b834892-05af-413a-b6fa-fdde617a1423-hubble-tls\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071140 kubelet[2997]: I0317 18:03:19.071098 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b834892-05af-413a-b6fa-fdde617a1423-cilium-ipsec-secrets\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071307 kubelet[2997]: I0317 18:03:19.071112 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-bpf-maps\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071307 kubelet[2997]: I0317 18:03:19.071126 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-lib-modules\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071307 kubelet[2997]: I0317 18:03:19.071139 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-xtables-lock\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071307 kubelet[2997]: I0317 18:03:19.071155 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b834892-05af-413a-b6fa-fdde617a1423-cilium-config-path\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071307 kubelet[2997]: I0317 18:03:19.071170 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-hostproc\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071307 kubelet[2997]: I0317 18:03:19.071183 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b834892-05af-413a-b6fa-fdde617a1423-clustermesh-secrets\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.071473 kubelet[2997]: I0317 18:03:19.071196 2997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b834892-05af-413a-b6fa-fdde617a1423-host-proc-sys-net\") pod \"cilium-j9nk9\" (UID: \"1b834892-05af-413a-b6fa-fdde617a1423\") " pod="kube-system/cilium-j9nk9"
Mar 17 18:03:19.219834 sshd[4780]: Connection closed by 139.178.68.195 port 55988
Mar 17 18:03:19.220365 sshd-session[4778]: pam_unix(sshd:session): session closed for user core
Mar 17 18:03:19.224835 systemd[1]: sshd@53-37.27.0.76:22-139.178.68.195:55988.service: Deactivated successfully.
Mar 17 18:03:19.227149 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:03:19.229917 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:03:19.231537 systemd-logind[1487]: Removed session 21. Mar 17 18:03:19.325115 containerd[1500]: time="2025-03-17T18:03:19.325076503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9nk9,Uid:1b834892-05af-413a-b6fa-fdde617a1423,Namespace:kube-system,Attempt:0,}" Mar 17 18:03:19.347580 containerd[1500]: time="2025-03-17T18:03:19.347474870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:03:19.347580 containerd[1500]: time="2025-03-17T18:03:19.347532840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:03:19.347580 containerd[1500]: time="2025-03-17T18:03:19.347542968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:03:19.347831 containerd[1500]: time="2025-03-17T18:03:19.347627648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:03:19.365559 systemd[1]: Started cri-containerd-29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d.scope - libcontainer container 29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d. Mar 17 18:03:19.400539 systemd[1]: Started sshd@54-37.27.0.76:22-139.178.68.195:56004.service - OpenSSH per-connection server daemon (139.178.68.195:56004). 
Mar 17 18:03:19.401646 containerd[1500]: time="2025-03-17T18:03:19.401597965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9nk9,Uid:1b834892-05af-413a-b6fa-fdde617a1423,Namespace:kube-system,Attempt:0,} returns sandbox id \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\"" Mar 17 18:03:19.415870 containerd[1500]: time="2025-03-17T18:03:19.415826977Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:03:19.427746 containerd[1500]: time="2025-03-17T18:03:19.427694333Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488\"" Mar 17 18:03:19.428571 containerd[1500]: time="2025-03-17T18:03:19.428537267Z" level=info msg="StartContainer for \"271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488\"" Mar 17 18:03:19.456597 systemd[1]: Started cri-containerd-271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488.scope - libcontainer container 271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488. Mar 17 18:03:19.484947 containerd[1500]: time="2025-03-17T18:03:19.483730585Z" level=info msg="StartContainer for \"271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488\" returns successfully" Mar 17 18:03:19.500962 systemd[1]: cri-containerd-271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488.scope: Deactivated successfully. 
Mar 17 18:03:19.535649 containerd[1500]: time="2025-03-17T18:03:19.535575104Z" level=info msg="shim disconnected" id=271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488 namespace=k8s.io Mar 17 18:03:19.535649 containerd[1500]: time="2025-03-17T18:03:19.535642521Z" level=warning msg="cleaning up after shim disconnected" id=271e4178f2de8b0cb2bb5e76560f7b0188542e33ed4363eb593e9d8daa926488 namespace=k8s.io Mar 17 18:03:19.535649 containerd[1500]: time="2025-03-17T18:03:19.535650686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:03:20.002981 containerd[1500]: time="2025-03-17T18:03:20.002893459Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:03:20.013488 containerd[1500]: time="2025-03-17T18:03:20.013440751Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5\"" Mar 17 18:03:20.014065 containerd[1500]: time="2025-03-17T18:03:20.014030395Z" level=info msg="StartContainer for \"1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5\"" Mar 17 18:03:20.040573 systemd[1]: Started cri-containerd-1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5.scope - libcontainer container 1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5. Mar 17 18:03:20.066135 containerd[1500]: time="2025-03-17T18:03:20.065608719Z" level=info msg="StartContainer for \"1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5\" returns successfully" Mar 17 18:03:20.077476 systemd[1]: cri-containerd-1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5.scope: Deactivated successfully. 
Mar 17 18:03:20.106882 containerd[1500]: time="2025-03-17T18:03:20.106779265Z" level=info msg="shim disconnected" id=1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5 namespace=k8s.io Mar 17 18:03:20.106882 containerd[1500]: time="2025-03-17T18:03:20.106857324Z" level=warning msg="cleaning up after shim disconnected" id=1e935e3bc45715c31bd6c0d3c5e78ef75e46cffa8d2c3508ce64c21a464497f5 namespace=k8s.io Mar 17 18:03:20.106882 containerd[1500]: time="2025-03-17T18:03:20.106867012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:03:20.375871 sshd[4829]: Accepted publickey for core from 139.178.68.195 port 56004 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 18:03:20.377071 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:03:20.381563 systemd-logind[1487]: New session 22 of user core. Mar 17 18:03:20.389564 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 18:03:20.806989 kubelet[2997]: I0317 18:03:20.806863 2997 setters.go:580] "Node became not ready" node="ci-4152-2-2-5-05efd5484b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:03:20Z","lastTransitionTime":"2025-03-17T18:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:03:21.024802 containerd[1500]: time="2025-03-17T18:03:21.024756816Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:03:21.041797 containerd[1500]: time="2025-03-17T18:03:21.041751847Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96\"" Mar 17 18:03:21.043937 containerd[1500]: time="2025-03-17T18:03:21.043900178Z" level=info msg="StartContainer for \"2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96\"" Mar 17 18:03:21.048636 sshd[4963]: Connection closed by 139.178.68.195 port 56004 Mar 17 18:03:21.048380 sshd-session[4829]: pam_unix(sshd:session): session closed for user core Mar 17 18:03:21.052997 systemd[1]: sshd@54-37.27.0.76:22-139.178.68.195:56004.service: Deactivated successfully. Mar 17 18:03:21.057638 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:03:21.060012 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:03:21.062260 systemd-logind[1487]: Removed session 22. Mar 17 18:03:21.086627 systemd[1]: Started cri-containerd-2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96.scope - libcontainer container 2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96. Mar 17 18:03:21.119442 containerd[1500]: time="2025-03-17T18:03:21.119210989Z" level=info msg="StartContainer for \"2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96\" returns successfully" Mar 17 18:03:21.127925 systemd[1]: cri-containerd-2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96.scope: Deactivated successfully. 
Mar 17 18:03:21.165957 containerd[1500]: time="2025-03-17T18:03:21.165888806Z" level=info msg="shim disconnected" id=2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96 namespace=k8s.io Mar 17 18:03:21.165957 containerd[1500]: time="2025-03-17T18:03:21.165952156Z" level=warning msg="cleaning up after shim disconnected" id=2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96 namespace=k8s.io Mar 17 18:03:21.165957 containerd[1500]: time="2025-03-17T18:03:21.165960161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:03:21.178158 systemd[1]: run-containerd-runc-k8s.io-2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96-runc.ONfspw.mount: Deactivated successfully. Mar 17 18:03:21.178560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dd8eb3ffb583c25691d4eee21efd7c56746538e9cf805016a57d2c595c3ee96-rootfs.mount: Deactivated successfully. Mar 17 18:03:21.223783 systemd[1]: Started sshd@55-37.27.0.76:22-139.178.68.195:56006.service - OpenSSH per-connection server daemon (139.178.68.195:56006). Mar 17 18:03:22.008034 containerd[1500]: time="2025-03-17T18:03:22.007968009Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:03:22.025246 containerd[1500]: time="2025-03-17T18:03:22.024340111Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347\"" Mar 17 18:03:22.024914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164653526.mount: Deactivated successfully. 
Mar 17 18:03:22.027598 containerd[1500]: time="2025-03-17T18:03:22.026898769Z" level=info msg="StartContainer for \"2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347\"" Mar 17 18:03:22.061554 systemd[1]: Started cri-containerd-2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347.scope - libcontainer container 2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347. Mar 17 18:03:22.084866 systemd[1]: cri-containerd-2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347.scope: Deactivated successfully. Mar 17 18:03:22.086491 containerd[1500]: time="2025-03-17T18:03:22.086261788Z" level=info msg="StartContainer for \"2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347\" returns successfully" Mar 17 18:03:22.114840 containerd[1500]: time="2025-03-17T18:03:22.114714219Z" level=info msg="shim disconnected" id=2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347 namespace=k8s.io Mar 17 18:03:22.114840 containerd[1500]: time="2025-03-17T18:03:22.114781336Z" level=warning msg="cleaning up after shim disconnected" id=2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347 namespace=k8s.io Mar 17 18:03:22.114840 containerd[1500]: time="2025-03-17T18:03:22.114791365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:03:22.178718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c47c3a11ae9f180a794d496426271776fa990c429eb20dd7c054d508b255347-rootfs.mount: Deactivated successfully. Mar 17 18:03:22.197827 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 56006 ssh2: RSA SHA256:lM7Ou7OtEArfu+2yO9fKO92Z0QeSyRpQPg+BMEwQlbo Mar 17 18:03:22.199558 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:03:22.204684 systemd-logind[1487]: New session 23 of user core. Mar 17 18:03:22.213631 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 17 18:03:23.010892 containerd[1500]: time="2025-03-17T18:03:23.010822502Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:03:23.025860 containerd[1500]: time="2025-03-17T18:03:23.025761203Z" level=info msg="CreateContainer within sandbox \"29776ee1996e29cc06b7aceeb077342a51c2e25b850e3b01bb8a8323223c1a4d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f22df6069c3778aade0071ae21869c48aebaeffaafe5f82ec01f1c9df7121ac\"" Mar 17 18:03:23.029107 containerd[1500]: time="2025-03-17T18:03:23.027514077Z" level=info msg="StartContainer for \"5f22df6069c3778aade0071ae21869c48aebaeffaafe5f82ec01f1c9df7121ac\"" Mar 17 18:03:23.029824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089074661.mount: Deactivated successfully. Mar 17 18:03:23.058560 systemd[1]: Started cri-containerd-5f22df6069c3778aade0071ae21869c48aebaeffaafe5f82ec01f1c9df7121ac.scope - libcontainer container 5f22df6069c3778aade0071ae21869c48aebaeffaafe5f82ec01f1c9df7121ac. 
Mar 17 18:03:23.088654 containerd[1500]: time="2025-03-17T18:03:23.088590044Z" level=info msg="StartContainer for \"5f22df6069c3778aade0071ae21869c48aebaeffaafe5f82ec01f1c9df7121ac\" returns successfully" Mar 17 18:03:23.680722 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 18:03:24.028761 kubelet[2997]: I0317 18:03:24.027347 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j9nk9" podStartSLOduration=6.027328925 podStartE2EDuration="6.027328925s" podCreationTimestamp="2025-03-17 18:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:03:24.026904473 +0000 UTC m=+350.816994640" watchObservedRunningTime="2025-03-17 18:03:24.027328925 +0000 UTC m=+350.817419082" Mar 17 18:03:26.658617 systemd-networkd[1403]: lxc_health: Link UP Mar 17 18:03:26.672588 systemd-networkd[1403]: lxc_health: Gained carrier Mar 17 18:03:27.923605 systemd-networkd[1403]: lxc_health: Gained IPv6LL Mar 17 18:03:33.339931 containerd[1500]: time="2025-03-17T18:03:33.339880765Z" level=info msg="StopPodSandbox for \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\"" Mar 17 18:03:33.340365 containerd[1500]: time="2025-03-17T18:03:33.340001064Z" level=info msg="TearDown network for sandbox \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" successfully" Mar 17 18:03:33.340365 containerd[1500]: time="2025-03-17T18:03:33.340011605Z" level=info msg="StopPodSandbox for \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" returns successfully" Mar 17 18:03:33.340790 containerd[1500]: time="2025-03-17T18:03:33.340762662Z" level=info msg="RemovePodSandbox for \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\"" Mar 17 18:03:33.340836 containerd[1500]: time="2025-03-17T18:03:33.340791447Z" level=info msg="Forcibly stopping sandbox 
\"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\"" Mar 17 18:03:33.340884 containerd[1500]: time="2025-03-17T18:03:33.340846352Z" level=info msg="TearDown network for sandbox \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" successfully" Mar 17 18:03:33.369122 containerd[1500]: time="2025-03-17T18:03:33.369068571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 18:03:33.369278 containerd[1500]: time="2025-03-17T18:03:33.369151801Z" level=info msg="RemovePodSandbox \"fe4126c5150428cb7d28ed1ca76e632095cb3c83ce34a55831854dd53e5e26c4\" returns successfully" Mar 17 18:03:33.369665 containerd[1500]: time="2025-03-17T18:03:33.369638642Z" level=info msg="StopPodSandbox for \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\"" Mar 17 18:03:33.369740 containerd[1500]: time="2025-03-17T18:03:33.369718474Z" level=info msg="TearDown network for sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" successfully" Mar 17 18:03:33.369740 containerd[1500]: time="2025-03-17T18:03:33.369734094Z" level=info msg="StopPodSandbox for \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" returns successfully" Mar 17 18:03:33.369963 containerd[1500]: time="2025-03-17T18:03:33.369935740Z" level=info msg="RemovePodSandbox for \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\"" Mar 17 18:03:33.369963 containerd[1500]: time="2025-03-17T18:03:33.369959115Z" level=info msg="Forcibly stopping sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\"" Mar 17 18:03:33.370042 containerd[1500]: time="2025-03-17T18:03:33.370004451Z" level=info msg="TearDown network for sandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" successfully" Mar 
17 18:03:33.373232 containerd[1500]: time="2025-03-17T18:03:33.373199481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 18:03:33.373317 containerd[1500]: time="2025-03-17T18:03:33.373243736Z" level=info msg="RemovePodSandbox \"1401b246f2e19d6dff4cff130d3ab68093f87001646a514725b79ac66dc68d1d\" returns successfully" Mar 17 18:03:33.845087 sshd[5084]: Connection closed by 139.178.68.195 port 56006 Mar 17 18:03:33.846190 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Mar 17 18:03:33.852212 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:03:33.853312 systemd[1]: sshd@55-37.27.0.76:22-139.178.68.195:56006.service: Deactivated successfully. Mar 17 18:03:33.856358 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:03:33.857597 systemd-logind[1487]: Removed session 23. Mar 17 18:03:49.880960 kubelet[2997]: E0317 18:03:49.880815 2997 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39082->10.0.0.2:2379: read: connection timed out" Mar 17 18:03:49.883979 systemd[1]: cri-containerd-854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8.scope: Deactivated successfully. Mar 17 18:03:49.884645 systemd[1]: cri-containerd-854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8.scope: Consumed 2.033s CPU time, 17.4M memory peak, 0B memory swap peak. Mar 17 18:03:49.907382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8-rootfs.mount: Deactivated successfully. 
Mar 17 18:03:49.915516 containerd[1500]: time="2025-03-17T18:03:49.915458918Z" level=info msg="shim disconnected" id=854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8 namespace=k8s.io Mar 17 18:03:49.916069 containerd[1500]: time="2025-03-17T18:03:49.916022815Z" level=warning msg="cleaning up after shim disconnected" id=854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8 namespace=k8s.io Mar 17 18:03:49.916069 containerd[1500]: time="2025-03-17T18:03:49.916054876Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:03:50.055116 kubelet[2997]: I0317 18:03:50.055081 2997 scope.go:117] "RemoveContainer" containerID="854d72ccaca78e9e07581fdd1752e1c32e5b66b16393e0cebd2f2c08ac9e41e8" Mar 17 18:03:50.059084 containerd[1500]: time="2025-03-17T18:03:50.059049448Z" level=info msg="CreateContainer within sandbox \"8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 17 18:03:50.072878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494463074.mount: Deactivated successfully. Mar 17 18:03:50.075929 containerd[1500]: time="2025-03-17T18:03:50.075890411Z" level=info msg="CreateContainer within sandbox \"8d140408838995f5a2db482d1339af13170316f3ab9f9c3836cf80c280d70ef5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bf897d4863401461e106c47ba56b8276ce2cc494249b52626f3c39f5bf553dca\"" Mar 17 18:03:50.076567 containerd[1500]: time="2025-03-17T18:03:50.076323609Z" level=info msg="StartContainer for \"bf897d4863401461e106c47ba56b8276ce2cc494249b52626f3c39f5bf553dca\"" Mar 17 18:03:50.105560 systemd[1]: Started cri-containerd-bf897d4863401461e106c47ba56b8276ce2cc494249b52626f3c39f5bf553dca.scope - libcontainer container bf897d4863401461e106c47ba56b8276ce2cc494249b52626f3c39f5bf553dca. 
Mar 17 18:03:50.147136 containerd[1500]: time="2025-03-17T18:03:50.147009774Z" level=info msg="StartContainer for \"bf897d4863401461e106c47ba56b8276ce2cc494249b52626f3c39f5bf553dca\" returns successfully" Mar 17 18:03:50.623899 systemd[1]: cri-containerd-39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4.scope: Deactivated successfully. Mar 17 18:03:50.624612 systemd[1]: cri-containerd-39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4.scope: Consumed 4.507s CPU time, 21.5M memory peak, 0B memory swap peak. Mar 17 18:03:50.655723 containerd[1500]: time="2025-03-17T18:03:50.655647447Z" level=info msg="shim disconnected" id=39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4 namespace=k8s.io Mar 17 18:03:50.655925 containerd[1500]: time="2025-03-17T18:03:50.655818814Z" level=warning msg="cleaning up after shim disconnected" id=39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4 namespace=k8s.io Mar 17 18:03:50.655925 containerd[1500]: time="2025-03-17T18:03:50.655832490Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:03:50.907281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4-rootfs.mount: Deactivated successfully. 
Mar 17 18:03:51.059844 kubelet[2997]: I0317 18:03:51.059811 2997 scope.go:117] "RemoveContainer" containerID="39c77818fc2a1cc89a1cb4df3227e6b119b581afab29cc9345f4b9ae646ffba4" Mar 17 18:03:51.061647 containerd[1500]: time="2025-03-17T18:03:51.061600787Z" level=info msg="CreateContainer within sandbox \"f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 17 18:03:51.074515 containerd[1500]: time="2025-03-17T18:03:51.074472758Z" level=info msg="CreateContainer within sandbox \"f36d90cecc52dc6873b648066bfd0ab045dff65d25fe7d773587adf091000576\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0f68090b3a00ad95c7497dee6610e3e1f5b6cb1e4290d8011ba2b9b3a89303ef\"" Mar 17 18:03:51.075042 containerd[1500]: time="2025-03-17T18:03:51.075017588Z" level=info msg="StartContainer for \"0f68090b3a00ad95c7497dee6610e3e1f5b6cb1e4290d8011ba2b9b3a89303ef\"" Mar 17 18:03:51.108980 systemd[1]: Started cri-containerd-0f68090b3a00ad95c7497dee6610e3e1f5b6cb1e4290d8011ba2b9b3a89303ef.scope - libcontainer container 0f68090b3a00ad95c7497dee6610e3e1f5b6cb1e4290d8011ba2b9b3a89303ef. 
Mar 17 18:03:51.163998 containerd[1500]: time="2025-03-17T18:03:51.163370963Z" level=info msg="StartContainer for \"0f68090b3a00ad95c7497dee6610e3e1f5b6cb1e4290d8011ba2b9b3a89303ef\" returns successfully" Mar 17 18:03:51.548110 kubelet[2997]: E0317 18:03:51.543593 2997 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38888->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-2-5-05efd5484b.182da9283af89a13 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-2-5-05efd5484b,UID:8a8cbeac4acd1939a3f0aed9dfb3b5cf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-5-05efd5484b,},FirstTimestamp:2025-03-17 18:03:41.115406867 +0000 UTC m=+367.905497024,LastTimestamp:2025-03-17 18:03:41.115406867 +0000 UTC m=+367.905497024,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-5-05efd5484b,}"