Dec 13 01:29:24.911292 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:29:24.911313 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:24.911321 kernel: BIOS-provided physical RAM map:
Dec 13 01:29:24.911327 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:29:24.911332 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:29:24.911337 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:29:24.911343 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 01:29:24.911348 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 01:29:24.911355 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:29:24.911360 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:29:24.911365 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:29:24.911370 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:29:24.911375 kernel: NX (Execute Disable) protection: active
Dec 13 01:29:24.911381 kernel: APIC: Static calls initialized
Dec 13 01:29:24.911389 kernel: SMBIOS 2.8 present.
Dec 13 01:29:24.911395 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 01:29:24.911401 kernel: Hypervisor detected: KVM
Dec 13 01:29:24.911406 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:29:24.911449 kernel: kvm-clock: using sched offset of 2989904239 cycles
Dec 13 01:29:24.911457 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:29:24.911463 kernel: tsc: Detected 2445.404 MHz processor
Dec 13 01:29:24.911469 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:29:24.911475 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:29:24.911483 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 01:29:24.911489 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:29:24.911495 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:29:24.911500 kernel: Using GB pages for direct mapping
Dec 13 01:29:24.911506 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:29:24.911511 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 01:29:24.911517 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911522 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911528 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911535 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 01:29:24.911541 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911547 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911552 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911558 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:24.911568 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 01:29:24.911583 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 01:29:24.911595 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 01:29:24.911610 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 01:29:24.911616 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 01:29:24.911622 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 01:29:24.911628 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 01:29:24.911634 kernel: No NUMA configuration found
Dec 13 01:29:24.911640 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 01:29:24.911647 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 01:29:24.911654 kernel: Zone ranges:
Dec 13 01:29:24.911659 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:29:24.911665 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 01:29:24.911671 kernel: Normal empty
Dec 13 01:29:24.911677 kernel: Movable zone start for each node
Dec 13 01:29:24.911683 kernel: Early memory node ranges
Dec 13 01:29:24.911689 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:29:24.911694 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 01:29:24.911700 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 01:29:24.911708 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:29:24.911714 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:29:24.911720 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:29:24.911725 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:29:24.911731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:29:24.911737 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:29:24.911743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:29:24.911749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:29:24.911754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:29:24.911762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:29:24.911768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:29:24.911774 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:29:24.911783 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:29:24.911794 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:29:24.911804 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:29:24.911815 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:29:24.911826 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:29:24.911836 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:29:24.911845 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:29:24.911851 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:29:24.911856 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:29:24.911863 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:29:24.911868 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 01:29:24.911875 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:24.911882 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:29:24.911888 kernel: random: crng init done
Dec 13 01:29:24.911895 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:29:24.911903 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:29:24.911918 kernel: Fallback order for Node 0: 0
Dec 13 01:29:24.911931 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 01:29:24.911943 kernel: Policy zone: DMA32
Dec 13 01:29:24.911954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:29:24.911961 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 01:29:24.911968 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:29:24.911974 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:29:24.911984 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:29:24.911990 kernel: Dynamic Preempt: voluntary
Dec 13 01:29:24.911996 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:29:24.912002 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:29:24.912008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:29:24.912014 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:29:24.912020 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:29:24.912026 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:29:24.912032 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:29:24.912038 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:29:24.912046 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:29:24.912052 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:29:24.912058 kernel: Console: colour VGA+ 80x25
Dec 13 01:29:24.912064 kernel: printk: console [tty0] enabled
Dec 13 01:29:24.912069 kernel: printk: console [ttyS0] enabled
Dec 13 01:29:24.912075 kernel: ACPI: Core revision 20230628
Dec 13 01:29:24.912081 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:29:24.912087 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:29:24.912093 kernel: x2apic enabled
Dec 13 01:29:24.912101 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:29:24.912107 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:29:24.912112 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:29:24.912118 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Dec 13 01:29:24.912124 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:29:24.912130 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:29:24.912136 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:29:24.912142 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:29:24.912157 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:29:24.912163 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:29:24.912169 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:29:24.912177 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:29:24.912183 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:29:24.912190 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:29:24.912196 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:29:24.912202 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:29:24.912209 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:29:24.912215 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:29:24.912221 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:29:24.912230 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:29:24.912236 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:29:24.912242 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:29:24.912248 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:29:24.912254 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:29:24.912262 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:29:24.912269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:29:24.912275 kernel: landlock: Up and running.
Dec 13 01:29:24.912281 kernel: SELinux: Initializing.
Dec 13 01:29:24.912287 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:24.912293 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:24.912299 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:29:24.912305 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:24.912311 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:24.912319 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:24.912326 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:29:24.912332 kernel: ... version: 0
Dec 13 01:29:24.912338 kernel: ... bit width: 48
Dec 13 01:29:24.912344 kernel: ... generic registers: 6
Dec 13 01:29:24.912350 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:29:24.912356 kernel: ... max period: 00007fffffffffff
Dec 13 01:29:24.912362 kernel: ... fixed-purpose events: 0
Dec 13 01:29:24.912368 kernel: ... event mask: 000000000000003f
Dec 13 01:29:24.912376 kernel: signal: max sigframe size: 1776
Dec 13 01:29:24.912382 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:29:24.912388 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:29:24.912394 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:29:24.912400 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:29:24.912406 kernel: .... node #0, CPUs: #1
Dec 13 01:29:24.912457 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:29:24.912464 kernel: smpboot: Max logical packages: 1
Dec 13 01:29:24.912470 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Dec 13 01:29:24.912476 kernel: devtmpfs: initialized
Dec 13 01:29:24.912485 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:29:24.912491 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:29:24.912498 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:29:24.912504 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:29:24.912510 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:29:24.912516 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:29:24.912522 kernel: audit: type=2000 audit(1734053363.234:1): state=initialized audit_enabled=0 res=1
Dec 13 01:29:24.912528 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:29:24.912535 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:29:24.912543 kernel: cpuidle: using governor menu
Dec 13 01:29:24.912549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:29:24.912555 kernel: dca service started, version 1.12.1
Dec 13 01:29:24.912561 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:29:24.912567 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:29:24.912574 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:29:24.912581 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:29:24.912593 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:29:24.912613 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:29:24.912625 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:29:24.912637 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:29:24.912648 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:29:24.912660 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:29:24.912671 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:29:24.912683 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:29:24.912692 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:29:24.912701 kernel: ACPI: Interpreter enabled
Dec 13 01:29:24.912712 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:29:24.912728 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:29:24.912740 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:29:24.912750 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:29:24.912760 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:29:24.912770 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:29:24.912995 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:29:24.913114 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:29:24.913246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:29:24.913255 kernel: PCI host bridge to bus 0000:00
Dec 13 01:29:24.913364 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:29:24.913798 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:29:24.913925 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:24.914073 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:24.914182 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:24.914282 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:24.914376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:29:24.914520 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:29:24.914701 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 01:29:24.914822 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 01:29:24.914928 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 01:29:24.915028 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 01:29:24.915134 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 01:29:24.915236 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:29:24.915346 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.915527 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 01:29:24.915675 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.915786 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 01:29:24.915918 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.916041 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 01:29:24.916159 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.916263 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 01:29:24.916372 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.916501 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 01:29:24.916621 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.916726 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 01:29:24.916836 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.916938 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 01:29:24.917048 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.917150 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 01:29:24.917286 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:24.917391 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 01:29:24.917537 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:29:24.917643 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:29:24.917753 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:29:24.917855 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 01:29:24.917961 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 01:29:24.918073 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:29:24.918218 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:29:24.918403 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:24.919206 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 01:29:24.919317 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 01:29:24.919444 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 01:29:24.919835 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:24.919965 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:24.920110 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:24.920233 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 01:29:24.920344 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 01:29:24.920495 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:24.920607 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:24.920708 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:24.920822 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 01:29:24.920928 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 01:29:24.921034 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 01:29:24.921135 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:24.923470 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:24.923611 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:24.923748 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 01:29:24.923898 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 01:29:24.924011 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:24.924144 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:24.924274 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:24.925470 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 01:29:24.925597 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 01:29:24.925708 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:24.925811 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:24.925913 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:24.926029 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 01:29:24.926137 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 01:29:24.926265 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 01:29:24.926394 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:24.927325 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:24.927466 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:24.927477 kernel: acpiphp: Slot [0] registered
Dec 13 01:29:24.927620 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:24.927740 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 01:29:24.927847 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 01:29:24.927953 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 01:29:24.928059 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:24.928168 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:24.928300 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:24.928312 kernel: acpiphp: Slot [0-2] registered
Dec 13 01:29:24.930446 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:24.930571 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:24.930677 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:24.930686 kernel: acpiphp: Slot [0-3] registered
Dec 13 01:29:24.930789 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:24.930897 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:24.930999 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:24.931008 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:29:24.931015 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:29:24.931021 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:29:24.931027 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:29:24.931034 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:29:24.931040 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:29:24.931046 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:29:24.931055 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:29:24.931062 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:29:24.931068 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:29:24.931074 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:29:24.931080 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:29:24.931087 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:29:24.931093 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:29:24.931099 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:29:24.931105 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:29:24.931114 kernel: iommu: Default domain type: Translated
Dec 13 01:29:24.931120 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:29:24.931126 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:29:24.931132 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:29:24.931138 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:29:24.931144 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 01:29:24.931246 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:29:24.931391 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:29:24.931593 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:29:24.931617 kernel: vgaarb: loaded
Dec 13 01:29:24.931629 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:29:24.931640 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:29:24.931650 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:29:24.931661 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:29:24.931672 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:29:24.931683 kernel: pnp: PnP ACPI init
Dec 13 01:29:24.931862 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:29:24.931880 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:29:24.931887 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:29:24.931893 kernel: NET: Registered PF_INET protocol family
Dec 13 01:29:24.931900 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:29:24.931906 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:29:24.931913 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:29:24.931919 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:29:24.931925 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:29:24.931932 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:29:24.931940 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:24.931946 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:24.931952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:29:24.931959 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:29:24.932066 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:29:24.932170 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:29:24.932273 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:29:24.933454 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 01:29:24.933581 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 01:29:24.933686 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 01:29:24.933789 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:24.933892 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:24.934037 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:24.934145 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:24.934248 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:24.934356 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:24.936611 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:24.936723 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:24.936826 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:24.936927 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:24.937029 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:24.937130 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:24.937310 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:24.937546 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:24.937652 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:24.937752 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:24.937852 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:24.937951 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:24.938051 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:24.938150 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 01:29:24.938249 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:24.938348 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:24.938484 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:24.938587 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 01:29:24.938686 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:24.938785 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:24.938884 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:24.938983 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 01:29:24.939082 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:24.939186 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:24.939282 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:29:24.939375 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:29:24.940527 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:24.940654 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:24.940752 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:24.940845 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:24.940952 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:24.941053 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:24.941160 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:24.941287 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:24.941396 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:24.943554 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:24.943699 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:24.943824 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:24.943939 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:24.944165 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:24.944301 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:24.944403 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:24.945041 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 01:29:24.945145 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:24.945268 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:24.945380 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 01:29:24.945504 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:24.945604 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:24.945708 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 01:29:24.945806 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:24.945903 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:24.945913 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:29:24.945920 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:29:24.945930 kernel: Initialise system trusted keyrings
Dec 13 01:29:24.945937 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:29:24.945944 kernel: Key type asymmetric registered
Dec 13 01:29:24.945950 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:29:24.945957 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:29:24.945963 kernel: io scheduler mq-deadline registered
Dec 13 01:29:24.945970 kernel: io scheduler kyber registered
Dec 13 01:29:24.945977 kernel: io scheduler bfq registered
Dec 13 01:29:24.946081 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 13 01:29:24.946206 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 13 01:29:24.946340 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 13 01:29:24.948522 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 13 01:29:24.948660 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 13 01:29:24.948765 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 13 01:29:24.948868 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 13 01:29:24.948971 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 13 01:29:24.949076 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 13 01:29:24.949203 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 13 01:29:24.949309 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 13 01:29:24.949430 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 13 01:29:24.949566 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 13 01:29:24.949672 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 13 01:29:24.949775 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 13 01:29:24.949878 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 13 01:29:24.949892 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:29:24.949995 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 13 01:29:24.950146 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 13 01:29:24.950156 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:29:24.950163 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 13 01:29:24.950170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:29:24.950180 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:29:24.950187 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:29:24.950193 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:29:24.950200 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:29:24.950325 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 01:29:24.950340 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:29:24.951524 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 01:29:24.951667 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:29:24 UTC (1734053364)
Dec 13 01:29:24.951769 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:29:24.951779 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:29:24.951787 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:29:24.951794 kernel: Segment Routing with IPv6
Dec 13 01:29:24.951807 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:29:24.951814 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:29:24.951821 kernel: Key type dns_resolver registered
Dec 13 01:29:24.951828 kernel: IPI shorthand broadcast: enabled
Dec 13 01:29:24.951835 kernel: sched_clock: Marking stable (1091008583, 130266464)->(1231719142, -10444095)
Dec 13 01:29:24.951842 kernel: registered taskstats version 1
Dec 13 01:29:24.951848 kernel: Loading compiled-in X.509 certificates
Dec 13 01:29:24.951855 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:29:24.951862 kernel: Key type .fscrypt registered
Dec 13 01:29:24.951871 kernel: Key type fscrypt-provisioning registered
Dec 13 01:29:24.951878 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:29:24.951885 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:29:24.951892 kernel: ima: No architecture policies found
Dec 13 01:29:24.951899 kernel: clk: Disabling unused clocks
Dec 13 01:29:24.951906 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:29:24.951913 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:29:24.951920 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:29:24.951926 kernel: Run /init as init process
Dec 13 01:29:24.951936 kernel: with arguments:
Dec 13 01:29:24.951943 kernel: /init
Dec 13 01:29:24.951950 kernel: with environment:
Dec 13 01:29:24.951956 kernel: HOME=/
Dec 13 01:29:24.951963 kernel: TERM=linux
Dec 13 01:29:24.951970 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:29:24.951979 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:24.951988 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:24.951998 systemd[1]: Detected architecture x86-64.
Dec 13 01:29:24.952005 systemd[1]: Running in initrd.
Dec 13 01:29:24.952012 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:29:24.952019 systemd[1]: Hostname set to .
Dec 13 01:29:24.952026 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:24.952034 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:29:24.952041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:24.952049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:24.952060 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:29:24.952067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:24.952075 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:29:24.952082 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:29:24.952091 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:29:24.952098 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:29:24.952105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:24.952115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:24.952122 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:24.952129 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:24.952136 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:24.952144 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:24.952151 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:24.952158 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:24.952165 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:29:24.952175 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:29:24.952182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:24.952189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:24.952197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:24.952204 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:24.952211 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:29:24.952219 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:24.952226 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:29:24.952233 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:29:24.952243 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:24.952250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:24.952276 systemd-journald[187]: Collecting audit messages is disabled.
Dec 13 01:29:24.952296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:24.952306 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:24.952313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:24.952321 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:29:24.952328 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:29:24.952339 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:29:24.952346 kernel: Bridge firewalling registered
Dec 13 01:29:24.952353 systemd-journald[187]: Journal started
Dec 13 01:29:24.952370 systemd-journald[187]: Runtime Journal (/run/log/journal/8a1608eafee246a497b72c5ebefa4bdf) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:29:24.921676 systemd-modules-load[188]: Inserted module 'overlay'
Dec 13 01:29:24.950247 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 13 01:29:24.992434 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:24.992570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:24.993838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:24.996844 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:29:25.003586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:25.004850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:25.006547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:25.016554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:25.021492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:25.032699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:29:25.034930 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:25.036156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:25.037474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:25.044872 dracut-cmdline[218]: dracut-dracut-053
Dec 13 01:29:25.047392 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:25.045565 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:25.072847 systemd-resolved[227]: Positive Trust Anchors:
Dec 13 01:29:25.072864 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:25.072890 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:25.075138 systemd-resolved[227]: Defaulting to hostname 'linux'.
Dec 13 01:29:25.076131 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:25.082621 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:25.126440 kernel: SCSI subsystem initialized
Dec 13 01:29:25.134439 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:29:25.144461 kernel: iscsi: registered transport (tcp)
Dec 13 01:29:25.162547 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:29:25.162587 kernel: QLogic iSCSI HBA Driver
Dec 13 01:29:25.207526 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:25.212569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:29:25.236535 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:29:25.236591 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:29:25.237692 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:29:25.277452 kernel: raid6: avx2x4 gen() 32241 MB/s
Dec 13 01:29:25.294443 kernel: raid6: avx2x2 gen() 29711 MB/s
Dec 13 01:29:25.311548 kernel: raid6: avx2x1 gen() 24322 MB/s
Dec 13 01:29:25.311584 kernel: raid6: using algorithm avx2x4 gen() 32241 MB/s
Dec 13 01:29:25.329628 kernel: raid6: .... xor() 4528 MB/s, rmw enabled
Dec 13 01:29:25.329656 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:29:25.348483 kernel: xor: automatically using best checksumming function avx
Dec 13 01:29:25.482465 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:29:25.495231 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:25.504572 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:25.517943 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Dec 13 01:29:25.521665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:25.529579 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:29:25.542651 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Dec 13 01:29:25.572935 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:25.578573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:25.642002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:25.650577 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:29:25.661332 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:25.664481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:25.666565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:25.667004 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:25.674552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:29:25.688175 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:25.730576 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:29:25.744003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:25.744706 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:29:25.744941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:25.749750 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:29:25.748187 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:25.748661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:25.748823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:25.749298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:25.757925 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:29:25.757950 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 01:29:25.758146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:25.826381 kernel: ACPI: bus type USB registered
Dec 13 01:29:25.826498 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:29:25.827464 kernel: usbcore: registered new interface driver hub
Dec 13 01:29:25.828520 kernel: usbcore: registered new device driver usb
Dec 13 01:29:25.834619 kernel: libata version 3.00 loaded.
Dec 13 01:29:25.873457 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:29:25.936728 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:29:25.936745 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:29:25.936894 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:29:25.937020 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:29:25.937170 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 01:29:25.937293 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 01:29:25.937445 kernel: scsi host1: ahci
Dec 13 01:29:25.937633 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:29:25.937763 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 01:29:25.937885 kernel: scsi host2: ahci
Dec 13 01:29:25.938009 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 01:29:25.938130 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:29:25.938277 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 13 01:29:25.938458 kernel: scsi host3: ahci
Dec 13 01:29:25.938591 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 01:29:25.938738 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 01:29:25.938875 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:29:25.939013 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 01:29:25.939186 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 13 01:29:25.939324 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:29:25.939484 kernel: scsi host4: ahci
Dec 13 01:29:25.939618 kernel: hub 2-0:1.0: USB hub found
Dec 13 01:29:25.939762 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 01:29:25.939893 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:29:25.939903 kernel: GPT:17805311 != 80003071
Dec 13 01:29:25.939912 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:29:25.939920 kernel: scsi host5: ahci
Dec 13 01:29:25.940048 kernel: GPT:17805311 != 80003071
Dec 13 01:29:25.940057 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:29:25.940066 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:25.940075 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:29:25.940211 kernel: scsi host6: ahci
Dec 13 01:29:25.940335 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Dec 13 01:29:25.940345 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Dec 13 01:29:25.940353 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Dec 13 01:29:25.940365 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Dec 13 01:29:25.940374 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Dec 13 01:29:25.940382 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Dec 13 01:29:25.878525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:25.892927 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:25.924734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:25.962449 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (462)
Dec 13 01:29:25.969432 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (465)
Dec 13 01:29:25.970788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 01:29:25.977658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 01:29:25.985347 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 01:29:25.985982 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 13 01:29:25.991075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:29:25.996551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:29:26.003001 disk-uuid[573]: Primary Header is updated.
Dec 13 01:29:26.003001 disk-uuid[573]: Secondary Entries is updated.
Dec 13 01:29:26.003001 disk-uuid[573]: Secondary Header is updated.
Dec 13 01:29:26.007435 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:26.013439 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:26.019432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:26.168443 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 01:29:26.244263 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:26.244331 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:29:26.246491 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:26.246525 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:26.248665 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:26.248685 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:26.248695 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:29:26.250543 kernel: ata1.00: applying bridge limits Dec 13 01:29:26.251444 kernel: ata1.00: configured for UDMA/100 Dec 13 01:29:26.252445 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:26.300288 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:29:26.315817 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:29:26.315840 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:29:26.315854 kernel: usbcore: registered new interface driver usbhid Dec 13 01:29:26.315868 kernel: usbhid: USB HID core driver Dec 13 01:29:26.315881 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 13 01:29:26.315895 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:29:26.316125 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 01:29:27.024023 disk-uuid[574]: The operation has completed successfully. Dec 13 01:29:27.025537 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:27.097546 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:29:27.097662 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:29:27.106581 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:29:27.111505 sh[595]: Success Dec 13 01:29:27.123491 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:29:27.174921 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:29:27.182515 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:29:27.184693 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:29:27.208841 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:29:27.208877 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:27.208887 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:29:27.210567 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:29:27.211743 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:29:27.220444 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:29:27.222448 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:29:27.223648 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
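verity-setup.service above activates /dev/mapper/usr: dm-verity keeps a Merkle tree of block hashes (here sha256, accelerated by the sha256-ni implementation) and checks every read against it. A rough Python sketch of the tree construction only, assuming plain 4 KiB blocks with no salt and no superblock handling, so the result is illustrative rather than byte-compatible with veritysetup output:

    import hashlib

    BLOCK = 4096

    def verity_root_hash(data: bytes) -> str:
        """Hash each 4 KiB data block, then hash the concatenated digests
        level by level until a single root digest remains."""
        if not data:
            raise ValueError("empty input")

        def hash_blocks(buf: bytes) -> list:
            return [hashlib.sha256(buf[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
                    for i in range(0, len(buf), BLOCK)]

        level = hash_blocks(data)
        while len(level) > 1:
            level = hash_blocks(b"".join(level))
        return level[0].hex()

Only the root digest has to be trusted; it is pinned from outside the filesystem at boot, so any tampered /usr block fails verification the moment it is read.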
Dec 13 01:29:27.234557 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:29:27.237549 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:29:27.251312 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:27.251358 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:27.251370 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:27.258266 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:27.258291 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:27.267944 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:29:27.270436 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:27.276002 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:29:27.281605 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:29:27.342722 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:27.349574 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:27.372001 ignition[695]: Ignition 2.19.0 Dec 13 01:29:27.372010 ignition[695]: Stage: fetch-offline Dec 13 01:29:27.372040 ignition[695]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:27.373275 systemd-networkd[776]: lo: Link UP Dec 13 01:29:27.372049 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:27.373279 systemd-networkd[776]: lo: Gained carrier Dec 13 01:29:27.372561 ignition[695]: parsed url from cmdline: "" Dec 13 01:29:27.375762 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:27.372566 ignition[695]: no config URL provided Dec 13 01:29:27.376510 systemd-networkd[776]: Enumeration completed Dec 13 01:29:27.372572 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:27.377662 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:27.372582 ignition[695]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:27.377666 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:27.372587 ignition[695]: failed to fetch config: resource requires networking Dec 13 01:29:27.378526 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:27.372737 ignition[695]: Ignition finished successfully Dec 13 01:29:27.378530 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:27.378943 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:27.379562 systemd-networkd[776]: eth0: Link UP Dec 13 01:29:27.379566 systemd-networkd[776]: eth0: Gained carrier Dec 13 01:29:27.379573 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:27.380831 systemd[1]: Reached target network.target - Network. 
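The ignition[695] lines above show the fetch-offline probe order: a config URL from the kernel command line, then /usr/lib/ignition/user.ign, then the base.d directories, and finally a deliberate failure because the hetzner provider can only be queried over the network. A Python sketch of that priority logic, reconstructed from the log rather than from Ignition's actual Go sources:

    import os

    def fetch_offline(cmdline_url=None):
        """Mirror the source-probing order in the ignition[695] lines."""
        if cmdline_url:                         # "parsed url from cmdline: ..."
            return ("url", cmdline_url)
        user_ign = "/usr/lib/ignition/user.ign"
        if os.path.exists(user_ign):            # "reading system config file"
            return ("file", user_ign)
        # no local config anywhere: the hetzner source is a metadata
        # service, unreachable before networking is configured
        raise RuntimeError("failed to fetch config: resource requires networking")

Failing here is expected and harmless; the fetch stage below retries once systemd-networkd has brought the links up.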
Dec 13 01:29:27.383651 systemd-networkd[776]: eth1: Link UP Dec 13 01:29:27.383655 systemd-networkd[776]: eth1: Gained carrier Dec 13 01:29:27.383662 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:27.387783 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:29:27.402849 ignition[783]: Ignition 2.19.0 Dec 13 01:29:27.402857 ignition[783]: Stage: fetch Dec 13 01:29:27.403025 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:27.403036 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:27.403114 ignition[783]: parsed url from cmdline: "" Dec 13 01:29:27.403121 ignition[783]: no config URL provided Dec 13 01:29:27.403129 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:27.403143 ignition[783]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:27.403160 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 01:29:27.403385 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 01:29:27.418450 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:27.518479 systemd-networkd[776]: eth0: DHCPv4 address 78.47.218.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:27.604407 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 01:29:27.608530 ignition[783]: GET result: OK Dec 13 01:29:27.608685 ignition[783]: parsing config with SHA512: 47b895f1fe046206ee572ad67485bd561fabc1f22c38ac2e6a629436173951d31270225aba774bb3530b6fa71dac1fc32732f4bcf640a261304353af5f946b41 Dec 13 01:29:27.614892 unknown[783]: fetched base config from "system" Dec 13 01:29:27.614903 unknown[783]: fetched base config from "system" Dec 13 01:29:27.615401 ignition[783]: fetch: fetch complete Dec 13 01:29:27.614910 unknown[783]: fetched user config from "hetzner" Dec 13 01:29:27.615407 ignition[783]: fetch: fetch passed Dec 13 01:29:27.615477 ignition[783]: Ignition finished successfully Dec 13 01:29:27.619472 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:29:27.623568 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:29:27.653208 ignition[790]: Ignition 2.19.0 Dec 13 01:29:27.653224 ignition[790]: Stage: kargs Dec 13 01:29:27.653492 ignition[790]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:27.653509 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:27.656714 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:29:27.654573 ignition[790]: kargs: kargs passed Dec 13 01:29:27.654632 ignition[790]: Ignition finished successfully Dec 13 01:29:27.671676 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:27.685597 ignition[796]: Ignition 2.19.0 Dec 13 01:29:27.685607 ignition[796]: Stage: disks Dec 13 01:29:27.685745 ignition[796]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:27.687888 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:29:27.685755 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:27.686451 ignition[796]: disks: disks passed Dec 13 01:29:27.689242 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
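Attempt #1 above fails with "network is unreachable", attempt #2 succeeds once both interfaces hold DHCP leases, and Ignition logs the SHA512 of the payload before parsing it. A self-contained Python sketch of that retry-then-hash loop; the endpoint URL is the one in the log, while the attempt count, timeout, and backoff constants are invented:

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    body = resp.read()
                # Ignition logs the digest before parsing, as seen above
                print("parsing config with SHA512:",
                      hashlib.sha512(body).hexdigest())
                return body
            except OSError as exc:              # urllib's URLError is an OSError
                print(f"GET error: attempt #{attempt}: {exc}")
                time.sleep(delay)
                delay *= 2                      # hypothetical exponential backoff
        raise RuntimeError("userdata fetch failed after all attempts")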
Dec 13 01:29:27.686487 ignition[796]: Ignition finished successfully Dec 13 01:29:27.690619 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:27.691651 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:27.692738 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:27.693668 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:27.701535 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:29:27.717752 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:29:27.720627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:29:27.723508 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:29:27.803451 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:29:27.803574 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:29:27.804772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:27.810488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:27.813520 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:29:27.817025 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:29:27.818943 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:29:27.818991 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:27.829921 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (813) Dec 13 01:29:27.829954 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:27.831767 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:27.831745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:29:27.839592 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:27.839611 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:27.839620 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:27.844556 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:29:27.846140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:29:27.879925 coreos-metadata[815]: Dec 13 01:29:27.879 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 01:29:27.881799 coreos-metadata[815]: Dec 13 01:29:27.881 INFO Fetch successful Dec 13 01:29:27.883762 coreos-metadata[815]: Dec 13 01:29:27.882 INFO wrote hostname ci-4081-2-1-e-82283d2dd3 to /sysroot/etc/hostname Dec 13 01:29:27.883243 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:29:27.886046 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:29:27.887881 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:29:27.891321 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:29:27.895207 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:29:27.974482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
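flatcar-metadata-hostname.service above performs exactly two steps: fetch the hostname from the metadata service and write it into the not-yet-pivoted root. A minimal Python equivalent, with the URL and destination path taken from the coreos-metadata[815] lines and error handling omitted:

    import urllib.request

    def write_sysroot_hostname(sysroot: str = "/sysroot") -> None:
        url = "http://169.254.169.254/hetzner/v1/metadata/hostname"
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode("ascii").strip()
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")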
Dec 13 01:29:27.979496 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:29:27.984292 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:29:27.990438 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:28.008141 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:29:28.010275 ignition[930]: INFO : Ignition 2.19.0 Dec 13 01:29:28.010275 ignition[930]: INFO : Stage: mount Dec 13 01:29:28.011311 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:28.011311 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:28.011311 ignition[930]: INFO : mount: mount passed Dec 13 01:29:28.011311 ignition[930]: INFO : Ignition finished successfully Dec 13 01:29:28.012795 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:29:28.018554 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:29:28.206714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:29:28.211593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:28.225468 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) Dec 13 01:29:28.227708 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:28.227740 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:28.229747 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:28.233452 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:28.233476 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:28.237072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:28.270407 ignition[959]: INFO : Ignition 2.19.0 Dec 13 01:29:28.270407 ignition[959]: INFO : Stage: files Dec 13 01:29:28.272130 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:28.272130 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:28.274213 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:29:28.274213 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:29:28.274213 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:29:28.278331 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:29:28.279642 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:29:28.279642 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:29:28.279201 unknown[959]: wrote ssh authorized keys file for user: core Dec 13 01:29:28.282981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:29:28.282981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:29:28.282981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:28.282981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:29:28.382842 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:29:28.547637 systemd-networkd[776]: eth0: Gained IPv6LL Dec 13 01:29:28.575116 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:28.576581 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:29:28.576581 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:29:29.129669 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 01:29:29.187846 systemd-networkd[776]: eth1: Gained IPv6LL Dec 13 01:29:29.214693 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:29.215722 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:29.215722 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:29.227992 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:29.227992 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:29.227992 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:29:29.595695 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:29:29.871221 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:29.871221 ignition[959]: INFO : files: op(d): [started] processing unit "containerd.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" 
Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:29.874534 ignition[959]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:29.874534 ignition[959]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:29.874534 ignition[959]: INFO : files: files passed Dec 13 01:29:29.874534 ignition[959]: INFO : Ignition finished successfully Dec 13 01:29:29.875465 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:29:29.886066 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:29:29.889466 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:29:29.890870 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:29:29.891442 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:29:29.901205 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:29.902178 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:29.903101 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:29.903293 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:29.904679 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:29:29.909523 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:29:29.927028 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:29:29.927144 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:29:29.928638 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:29:29.929148 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:29:29.930167 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:29:29.935517 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:29:29.945495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:29.950541 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:29:29.958192 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:29.958747 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:29.959770 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:29:29.960743 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:29:29.960845 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:29.961979 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:29:29.962622 systemd[1]: Stopped target basic.target - Basic System. 
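The files stage above reduces to a few kinds of filesystem operations under /sysroot: regular files (some fetched first, like the helm tarball), symlinks such as /etc/extensions/kubernetes.raw, unit files and drop-ins, and presets like "enabled for prepare-helm.service". A toy Python applier showing those shapes; the entry dictionaries and the preset file name are invented for illustration and do not match Ignition's real config schema:

    import os

    PRESET = "/etc/systemd/system-preset/20-ignition.preset"  # assumed name

    def apply_entry(sysroot: str, entry: dict) -> None:
        """Apply one files-stage style operation below sysroot."""
        path = sysroot + entry.get("path", PRESET)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        if entry["kind"] == "file":        # also covers units and drop-ins
            with open(path, "w") as f:
                f.write(entry["contents"])
        elif entry["kind"] == "link":      # e.g. the kubernetes.raw sysext link
            os.symlink(entry["target"], path)
        elif entry["kind"] == "preset":    # "setting preset to enabled for ..."
            with open(path, "a") as f:
                f.write(f"enable {entry['unit']}\n")

    apply_entry("/sysroot", {
        "kind": "link",
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
    })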
Dec 13 01:29:29.963583 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:29:29.964384 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:29.965260 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:29.966232 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:29:29.967203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:29.968204 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:29:29.969185 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:29.970197 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:29.971083 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:29.971173 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:29.972227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:29.972863 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:29.973736 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:29.975631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:29.976355 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:29.976460 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:29.977723 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:29.977852 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:29.978497 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:29.978632 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:29.979455 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:29:29.979581 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:29:29.987824 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:29:29.990652 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:29:29.991150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:29:29.991299 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:29.993309 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:29.993451 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:30.001213 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:30.001350 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:29:30.006063 ignition[1013]: INFO : Ignition 2.19.0 Dec 13 01:29:30.006063 ignition[1013]: INFO : Stage: umount Dec 13 01:29:30.006063 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:30.006063 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:30.006063 ignition[1013]: INFO : umount: umount passed Dec 13 01:29:30.006063 ignition[1013]: INFO : Ignition finished successfully Dec 13 01:29:30.008784 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:30.008898 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Dec 13 01:29:30.012538 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:30.012612 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:30.013233 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:30.013283 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:30.015689 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:29:30.015734 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:29:30.016470 systemd[1]: Stopped target network.target - Network. Dec 13 01:29:30.017119 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:30.017187 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:30.017758 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:30.018320 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:30.020336 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:30.023338 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:29:30.026917 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:30.029174 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:30.029234 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:30.032015 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:30.032063 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:30.032643 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:30.032691 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:30.036516 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:29:30.036563 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:30.037157 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:30.044751 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:29:30.045455 systemd-networkd[776]: eth0: DHCPv6 lease lost Dec 13 01:29:30.048091 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:30.050376 systemd-networkd[776]: eth1: DHCPv6 lease lost Dec 13 01:29:30.051802 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:30.051935 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:30.055629 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:30.055741 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:30.064964 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:30.065014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:30.070555 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:30.071196 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:30.071245 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:30.071769 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:30.071813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:30.073266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:30.073310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 13 01:29:30.074136 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:30.074178 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:30.075262 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:30.077373 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:29:30.077482 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:29:30.086170 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:30.086248 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:30.087301 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:30.087399 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:30.088089 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:30.088231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:30.089907 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:29:30.089968 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:30.090949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:30.090985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:30.091775 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:30.091818 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:30.093218 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:30.093261 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:30.094201 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:30.094247 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:30.104574 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:30.106677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:29:30.106727 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:30.107734 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:29:30.107778 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:30.108722 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:30.108764 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:30.110697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:30.110740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:30.113490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:30.113590 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:30.114941 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:30.122789 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:30.130223 systemd[1]: Switching root. Dec 13 01:29:30.157292 systemd-journald[187]: Journal stopped Dec 13 01:29:31.066982 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:29:31.067051 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:31.067064 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:31.067082 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:31.067091 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:31.067105 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:31.067114 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:31.067123 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:31.067132 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:31.067142 kernel: audit: type=1403 audit(1734053370.310:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:31.067152 systemd[1]: Successfully loaded SELinux policy in 39.153ms. Dec 13 01:29:31.067175 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.965ms. Dec 13 01:29:31.067186 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:31.067196 systemd[1]: Detected virtualization kvm. Dec 13 01:29:31.067209 systemd[1]: Detected architecture x86-64. Dec 13 01:29:31.067219 systemd[1]: Detected first boot. Dec 13 01:29:31.067229 systemd[1]: Hostname set to . Dec 13 01:29:31.067239 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:31.067249 zram_generator::config[1073]: No configuration found. Dec 13 01:29:31.067260 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:31.067270 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:31.067280 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:29:31.067297 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:31.067307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:29:31.067316 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:31.067326 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:29:31.067338 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:31.067349 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:31.067358 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:31.067368 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:31.067381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:31.067391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:31.067402 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:31.067532 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:31.067551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:29:31.067562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
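"Initializing machine ID from VM UUID" above is systemd's first-boot shortcut on KVM: rather than generating a random ID it derives /etc/machine-id from the hypervisor-supplied DMI product UUID. A Python sketch of that derivation as I understand it; the sysfs path is the conventional one, and the exact normalization rules are an assumption:

    def machine_id_from_vm_uuid(
            path: str = "/sys/class/dmi/id/product_uuid") -> str:
        """Turn a dashed DMI UUID into the 32-hex-digit lowercase form
        used by /etc/machine-id (normalization assumed)."""
        with open(path) as f:
            uuid = f.read().strip()
        machine_id = uuid.replace("-", "").lower()
        if len(machine_id) != 32:
            raise ValueError("machine IDs are 128 bits of hex")
        return machine_id

The journal directory name that appears further below (/run/log/journal/8a1608eafee246a497b72c5ebefa4bdf) is this machine ID embedded in a path.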
Dec 13 01:29:31.067573 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:29:31.067583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:31.067593 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:31.067607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:31.067617 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:31.067627 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:31.067637 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:31.067647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:31.067657 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:31.067675 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:29:31.067688 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:29:31.067698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:31.067708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:31.067718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:31.067728 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:29:31.067738 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:31.067845 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:31.067859 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:31.067869 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:31.067880 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:31.067890 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:29:31.067900 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:29:31.067910 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:31.067920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:31.067933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:31.067943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:31.067954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:31.067964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:31.067974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:31.067985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:31.067995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:31.068005 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:31.068015 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Dec 13 01:29:31.068028 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:29:31.068039 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:31.068049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:31.068150 kernel: fuse: init (API version 7.39) Dec 13 01:29:31.068168 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:31.068197 systemd-journald[1175]: Collecting audit messages is disabled. Dec 13 01:29:31.068217 kernel: loop: module loaded Dec 13 01:29:31.068231 systemd-journald[1175]: Journal started Dec 13 01:29:31.068249 systemd-journald[1175]: Runtime Journal (/run/log/journal/8a1608eafee246a497b72c5ebefa4bdf) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:29:31.073444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:31.080442 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:31.089905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:31.089943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:31.098483 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:31.097487 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:31.097999 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:29:31.098538 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:31.099026 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:31.099610 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:31.100273 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:31.101027 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:29:31.101807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:31.102546 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:29:31.102721 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:29:31.103494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:31.103663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:31.104674 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:31.104863 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:31.105649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:31.105821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:31.106566 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:31.106739 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:31.107533 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:31.107722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:31.109927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:31.110779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 13 01:29:31.112838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:31.125297 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:31.131505 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:31.133886 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:31.135479 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:31.142545 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:29:31.146543 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:29:31.149503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:31.157930 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:31.160303 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:31.165640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:31.171538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:31.176733 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:31.177547 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:29:31.186899 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:31.189112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:31.193281 systemd-journald[1175]: Time spent on flushing to /var/log/journal/8a1608eafee246a497b72c5ebefa4bdf is 24.962ms for 1130 entries. Dec 13 01:29:31.193281 systemd-journald[1175]: System Journal (/var/log/journal/8a1608eafee246a497b72c5ebefa4bdf) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:29:31.229901 systemd-journald[1175]: Received client request to flush runtime journal. Dec 13 01:29:31.214255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:31.229937 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Dec 13 01:29:31.229948 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Dec 13 01:29:31.232254 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:31.244741 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:31.252864 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:31.257742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:31.265624 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:31.274877 udevadm[1236]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:29:31.288625 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:31.295592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:31.312594 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. 
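A small sanity check on the journald size reports above: for both the runtime journal (4.8M used, max 38.4M, 33.6M free) and the system journal (8.0M used, max 584.8M, 576.8M free), used plus free equals the configured maximum. Verified in Python with the figures copied from the log:

    # figures (MiB) copied verbatim from the systemd-journald[1175] lines
    journals = {
        "runtime (/run)": (4.8, 33.6, 38.4),
        "system (/var)":  (8.0, 576.8, 584.8),
    }
    for name, (used, free, cap) in journals.items():
        assert round(used + free, 1) == cap
        print(f"{name}: {used} + {free} = {cap} MiB")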
Dec 13 01:29:31.312906 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Dec 13 01:29:31.317208 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:31.611312 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:29:31.616620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:31.640510 systemd-udevd[1245]: Using default interface naming scheme 'v255'. Dec 13 01:29:31.661241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:31.671546 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:31.694026 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:29:31.749445 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1253) Dec 13 01:29:31.760351 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1253) Dec 13 01:29:31.759000 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:29:31.760744 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:29:31.822773 systemd-networkd[1249]: lo: Link UP Dec 13 01:29:31.823076 systemd-networkd[1249]: lo: Gained carrier Dec 13 01:29:31.827037 systemd-networkd[1249]: Enumeration completed Dec 13 01:29:31.827321 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:31.832594 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:31.832649 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:31.833798 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:31.833868 systemd-networkd[1249]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:31.834680 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:31.834906 systemd-networkd[1249]: eth0: Link UP Dec 13 01:29:31.834952 systemd-networkd[1249]: eth0: Gained carrier Dec 13 01:29:31.834994 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:31.835553 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:29:31.838437 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 13 01:29:31.840670 systemd-networkd[1249]: eth1: Link UP Dec 13 01:29:31.841450 systemd-networkd[1249]: eth1: Gained carrier Dec 13 01:29:31.841508 systemd-networkd[1249]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:31.844702 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:29:31.849434 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:29:31.864477 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1258) Dec 13 01:29:31.884458 systemd-networkd[1249]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:31.889479 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:29:31.894482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:31.894621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:31.900617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:31.904536 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:31.914754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:31.919409 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:31.919487 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:31.919524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:31.919918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:31.920098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:31.948470 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 01:29:31.949491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:31.949693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:31.950659 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:31.950864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:31.960491 systemd-networkd[1249]: eth0: DHCPv4 address 78.47.218.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:31.966956 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:29:31.967170 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:29:31.968667 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:29:31.964310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:29:31.968223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:31.968256 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
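Both DHCP leases above assign a /32 host address (10.0.0.3/32 and 78.47.218.196/32) whose gateway cannot, by definition, lie inside the prefix, so ordinary on-subnet routing can never reach it; the lease only works because a host route to the gateway is installed on-link. The stdlib ipaddress module makes the mismatch explicit:

    import ipaddress

    for addr, gw in [("10.0.0.3/32", "10.0.0.1"),
                     ("78.47.218.196/32", "172.31.1.1")]:
        net = ipaddress.ip_interface(addr).network
        reachable = ipaddress.ip_address(gw) in net
        # a /32 contains exactly one address, so the gateway never matches
        print(f"{gw} inside {net}? {reachable} -> needs an on-link route")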
Dec 13 01:29:31.978433 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 13 01:29:31.981448 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 13 01:29:31.985593 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:29:31.987002 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 01:29:31.987043 kernel: [drm] features: -context_init Dec 13 01:29:31.994446 kernel: [drm] number of scanouts: 1 Dec 13 01:29:31.994473 kernel: [drm] number of cap sets: 0 Dec 13 01:29:31.996452 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 01:29:31.997459 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:29:32.010723 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 01:29:32.010759 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 01:29:32.010866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:32.021442 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 01:29:32.040448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:32.040763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:32.049590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:32.104519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:32.154007 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:29:32.167532 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:29:32.182448 lvm[1312]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:32.213261 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:29:32.213509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:32.218572 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:29:32.222728 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:32.255047 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:29:32.255295 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:32.255401 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:32.255449 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:32.255539 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:32.257548 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:32.271574 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:32.273266 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:32.273508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:32.275810 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:29:32.281645 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Dec 13 01:29:32.286282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:32.294891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:29:32.304523 kernel: loop0: detected capacity change from 0 to 8 Dec 13 01:29:32.308639 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:29:32.322632 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:32.328895 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:32.330287 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:32.341518 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:29:32.376441 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 01:29:32.408442 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 01:29:32.452347 kernel: loop4: detected capacity change from 0 to 8 Dec 13 01:29:32.456438 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 01:29:32.474025 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 01:29:32.493581 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 01:29:32.509599 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 01:29:32.510143 (sd-merge)[1336]: Merged extensions into '/usr'. Dec 13 01:29:32.514350 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:32.514362 systemd[1]: Reloading... Dec 13 01:29:32.590172 zram_generator::config[1376]: No configuration found. Dec 13 01:29:32.656226 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:29:32.695083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:32.760484 systemd[1]: Reloading finished in 245 ms. Dec 13 01:29:32.781733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:29:32.786596 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:29:32.794568 systemd[1]: Starting ensure-sysext.service... Dec 13 01:29:32.797573 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:32.803768 systemd[1]: Reloading requested from client PID 1414 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:29:32.803851 systemd[1]: Reloading... Dec 13 01:29:32.818128 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:29:32.818449 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:29:32.819240 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:29:32.819503 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Dec 13 01:29:32.819568 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Dec 13 01:29:32.822857 systemd-tmpfiles[1415]: Detected autofs mount point /boot during canonicalization of boot. 
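Note: the eight loop devices correspond to the four sysext images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) each scanned twice before sd-merge overlays them onto /usr; the reload that follows picks up unit files the merge brought in. The kernel reports loop capacity in 512-byte sectors, so a quick translation sketch (standard sysfs paths; illustrative only):

from pathlib import Path

# "detected capacity change from 0 to 211296" is in 512-byte sectors:
# 211296 * 512 bytes is roughly 103 MiB of extension image.
for dev in sorted(Path("/sys/block").glob("loop*")):
    sectors = int((dev / "size").read_text())
    if sectors:  # unattached loop devices report 0
        print(f"{dev.name}: {sectors} sectors ({sectors * 512 / 2**20:.1f} MiB)")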
Dec 13 01:29:32.822868 systemd-tmpfiles[1415]: Skipping /boot Dec 13 01:29:32.834514 systemd-tmpfiles[1415]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:32.834526 systemd-tmpfiles[1415]: Skipping /boot Dec 13 01:29:32.873869 zram_generator::config[1446]: No configuration found. Dec 13 01:29:32.969698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:33.023716 systemd[1]: Reloading finished in 219 ms. Dec 13 01:29:33.044652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:33.055557 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:33.076581 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:29:33.079436 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:29:33.088095 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:33.092577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:29:33.095575 systemd-networkd[1249]: eth0: Gained IPv6LL Dec 13 01:29:33.100340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:33.101136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:33.108503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:33.117600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:33.126606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:33.127104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:33.127190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:33.128060 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:33.135580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:33.135754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:33.136581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:33.136765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:33.149271 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:33.149470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:33.156016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:33.156212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:33.165114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:33.173673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:33.177672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:29:33.178148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:33.178229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:33.190404 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:29:33.191370 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:29:33.199504 systemd-resolved[1505]: Positive Trust Anchors: Dec 13 01:29:33.199518 systemd-resolved[1505]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:33.199544 systemd-resolved[1505]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:33.201698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:33.201894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:33.205207 systemd-resolved[1505]: Using system hostname 'ci-4081-2-1-e-82283d2dd3'. Dec 13 01:29:33.207163 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:33.207341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:33.210959 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:33.213793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:33.213990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:33.225176 augenrules[1540]: No rules Dec 13 01:29:33.226755 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:33.229274 systemd[1]: Reached target network.target - Network. Dec 13 01:29:33.230321 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:33.230733 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:33.231164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:33.231307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:33.236573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:33.239742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:33.245476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:33.250405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:33.250974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:33.259911 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
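Note: the positive trust anchor logged above is the DS record of the DNS root zone's KSK-2017 signing key, which systemd-resolved ships as its built-in DNSSEC anchor; the negative anchors exempt private and reverse zones (RFC 1918 space, home.arpa, .local and friends) from validation, since no signed delegation can exist for them. A stdlib sketch showing which of those reverse zones a given private address falls under:

import ipaddress

# reverse_pointer yields the PTR name; its suffix is the enclosing reverse
# zone listed among the negative trust anchors, e.g. 10.in-addr.arpa.
for addr in ("10.0.0.3", "192.168.1.1", "172.16.0.1"):
    print(f"{addr} -> {ipaddress.ip_address(addr).reverse_pointer}")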
Dec 13 01:29:33.263294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:33.264208 systemd[1]: Finished ensure-sysext.service. Dec 13 01:29:33.265944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:33.266121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:33.267874 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:33.268049 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:33.269652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:33.269827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:33.270509 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:33.270680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:33.275689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:33.275771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:33.284665 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:29:33.287160 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:29:33.307540 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:29:33.309948 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:33.342768 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:29:33.343355 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:33.343885 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:29:33.344386 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:29:33.347834 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:29:33.348272 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:29:33.348303 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:33.348716 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:29:33.349261 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:29:33.352960 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:29:33.353360 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:33.358086 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:29:33.360760 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:29:33.369849 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:29:33.370745 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:29:33.371209 systemd[1]: Reached target sockets.target - Socket Units. 
Dec 13 01:29:33.372642 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:33.373170 systemd[1]: System is tainted: cgroupsv1
Dec 13 01:29:33.373205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:33.373229 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:33.376488 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:29:33.379657 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:29:33.384539 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:29:33.391491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:29:33.401542 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:29:33.403089 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:29:33.409518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:33.415886 jq[1577]: false
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found loop4
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found loop5
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found loop6
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found loop7
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda1
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda2
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda3
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found usr
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda4
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda6
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda7
Dec 13 01:29:33.428951 extend-filesystems[1580]: Found sda9
Dec 13 01:29:33.428951 extend-filesystems[1580]: Checking size of /dev/sda9
Dec 13 01:29:33.418901 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:29:33.460303 coreos-metadata[1574]: Dec 13 01:29:33.415 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 01:29:33.460303 coreos-metadata[1574]: Dec 13 01:29:33.417 INFO Fetch successful
Dec 13 01:29:33.460303 coreos-metadata[1574]: Dec 13 01:29:33.417 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 01:29:33.460303 coreos-metadata[1574]: Dec 13 01:29:33.418 INFO Fetch successful
Dec 13 01:29:33.433874 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:29:33.447732 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:29:33.452689 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 01:29:33.464029 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:29:33.467653 extend-filesystems[1580]: Resized partition /dev/sda9
Dec 13 01:29:33.474757 extend-filesystems[1598]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:29:33.476525 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:29:33.485931 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 01:29:33.488830 dbus-daemon[1575]: [system] SELinux support is enabled Dec 13 01:29:33.494289 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:29:33.496677 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:29:33.501741 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:29:33.511796 systemd-timesyncd[1565]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Dec 13 01:29:33.511853 systemd-timesyncd[1565]: Initial clock synchronization to Fri 2024-12-13 01:29:33.231854 UTC. Dec 13 01:29:33.513500 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:29:33.520807 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:29:33.531254 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:29:33.534688 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:33.535234 jq[1613]: true Dec 13 01:29:33.547835 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:29:33.548095 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:33.560749 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:33.565177 update_engine[1611]: I20241213 01:29:33.565112 1611 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:33.571855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:33.572365 update_engine[1611]: I20241213 01:29:33.572160 1611 update_check_scheduler.cc:74] Next update check in 3m57s Dec 13 01:29:33.572118 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:33.579445 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1260) Dec 13 01:29:33.599248 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:33.618476 jq[1626]: true Dec 13 01:29:33.657570 systemd-logind[1605]: New seat seat0. Dec 13 01:29:33.658519 tar[1625]: linux-amd64/helm Dec 13 01:29:33.671553 systemd-networkd[1249]: eth1: Gained IPv6LL Dec 13 01:29:33.673148 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:33.674225 systemd-logind[1605]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:29:33.674244 systemd-logind[1605]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:33.680362 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:29:33.681096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:33.681119 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:33.683065 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:33.683084 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
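Note: the out-of-order timestamps around the timesyncd entries are a clock-step artifact: the journal stamps the synchronization message at about 01:29:33.511853, while the clock was set to 01:29:33.231854, i.e. stepped back roughly 0.28 s. A small sketch of that arithmetic (timestamps copied from the entries above; the step size is approximate, since journald stamps the message rather than the step itself):

from datetime import datetime

logged = datetime.fromisoformat("2024-12-13 01:29:33.511853")  # journal stamp
synced = datetime.fromisoformat("2024-12-13 01:29:33.231854")  # new clock value
print(f"clock stepped by {(synced - logged).total_seconds():+.3f} s")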
Dec 13 01:29:33.692930 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 13 01:29:33.690789 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:29:33.693949 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:29:33.721614 extend-filesystems[1598]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 01:29:33.721614 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 13 01:29:33.721614 extend-filesystems[1598]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 13 01:29:33.743431 extend-filesystems[1580]: Resized filesystem in /dev/sda9
Dec 13 01:29:33.743431 extend-filesystems[1580]: Found sr0
Dec 13 01:29:33.729276 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:29:33.729589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:29:33.745621 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:29:33.748186 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:29:33.764226 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:29:33.764593 bash[1670]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:29:33.767038 sshd_keygen[1622]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:29:33.780745 systemd[1]: Starting sshkeys.service...
Dec 13 01:29:33.811456 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:29:33.823820 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:29:33.855350 coreos-metadata[1684]: Dec 13 01:29:33.855 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 13 01:29:33.856946 coreos-metadata[1684]: Dec 13 01:29:33.856 INFO Fetch successful
Dec 13 01:29:33.859653 unknown[1684]: wrote ssh authorized keys file for user: core
Dec 13 01:29:33.867187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:29:33.878162 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:29:33.899133 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:29:33.899396 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:29:33.907339 locksmithd[1651]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:29:33.913646 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:29:33.932982 update-ssh-keys[1698]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:29:33.932614 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:29:33.941815 systemd[1]: Finished sshkeys.service.
Dec 13 01:29:33.949593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:29:33.962841 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:29:33.973510 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:29:33.974101 systemd[1]: Reached target getty.target - Login Prompts.
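Note: the resize grows the root filesystem from 1617920 to 9393147 blocks of 4 KiB, i.e. from about 6.2 GiB to about 35.8 GiB, consistent with a small image being expanded to fill a larger Hetzner disk on first boot. The arithmetic:

BLOCK = 4096  # ext4 block size, per the "(4k) blocks" resize message above

for label, blocks in (("before", 1617920), ("after", 9393147)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")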
Dec 13 01:29:33.989313 containerd[1627]: time="2024-12-13T01:29:33.989045842Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:34.017930 containerd[1627]: time="2024-12-13T01:29:34.017899887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019427364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019452253Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019466861Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019615471Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019629778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019689075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019699958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019920867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019934884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019946908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020430 containerd[1627]: time="2024-12-13T01:29:34.019954938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020643 containerd[1627]: time="2024-12-13T01:29:34.020037044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020643 containerd[1627]: time="2024-12-13T01:29:34.020245350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020699 containerd[1627]: time="2024-12-13T01:29:34.020408103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:34.020746 containerd[1627]: time="2024-12-13T01:29:34.020733978Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:34.020875 containerd[1627]: time="2024-12-13T01:29:34.020860448Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:34.020968 containerd[1627]: time="2024-12-13T01:29:34.020955565Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:34.028243 containerd[1627]: time="2024-12-13T01:29:34.028225183Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:34.028337 containerd[1627]: time="2024-12-13T01:29:34.028323291Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:34.028390 containerd[1627]: time="2024-12-13T01:29:34.028377403Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:34.028461 containerd[1627]: time="2024-12-13T01:29:34.028448841Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:29:34.028506 containerd[1627]: time="2024-12-13T01:29:34.028495824Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:34.028664 containerd[1627]: time="2024-12-13T01:29:34.028648846Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:34.028935 containerd[1627]: time="2024-12-13T01:29:34.028919873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:34.029134 containerd[1627]: time="2024-12-13T01:29:34.029119588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:34.029205 containerd[1627]: time="2024-12-13T01:29:34.029193647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:34.029273 containerd[1627]: time="2024-12-13T01:29:34.029245052Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:34.029320 containerd[1627]: time="2024-12-13T01:29:34.029309408Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029384 containerd[1627]: time="2024-12-13T01:29:34.029372227Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029514 containerd[1627]: time="2024-12-13T01:29:34.029448337Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029514 containerd[1627]: time="2024-12-13T01:29:34.029478914Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029514 containerd[1627]: time="2024-12-13T01:29:34.029492669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:29:34.029660 containerd[1627]: time="2024-12-13T01:29:34.029586762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029660 containerd[1627]: time="2024-12-13T01:29:34.029604232Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029660 containerd[1627]: time="2024-12-13T01:29:34.029615550Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:34.029660 containerd[1627]: time="2024-12-13T01:29:34.029636376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029744263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029769471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029781805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029791459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029803986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029823420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029834003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029844227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029855796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029865799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029874678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029884101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029896356Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029912172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.029970 containerd[1627]: time="2024-12-13T01:29:34.029921807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:34.030199 containerd[1627]: time="2024-12-13T01:29:34.029930523Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:34.030378 containerd[1627]: time="2024-12-13T01:29:34.030240146Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:29:34.030378 containerd[1627]: time="2024-12-13T01:29:34.030258206Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:34.030378 containerd[1627]: time="2024-12-13T01:29:34.030267889Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:34.030378 containerd[1627]: time="2024-12-13T01:29:34.030276866Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:34.030378 containerd[1627]: time="2024-12-13T01:29:34.030344406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.030378 containerd[1627]: time="2024-12-13T01:29:34.030356120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:34.030591 containerd[1627]: time="2024-12-13T01:29:34.030430411Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:34.030591 containerd[1627]: time="2024-12-13T01:29:34.030444215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:29:34.031006 containerd[1627]: time="2024-12-13T01:29:34.030842214Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:34.031006 containerd[1627]: time="2024-12-13T01:29:34.030908061Z" level=info msg="Connect containerd service" Dec 13 01:29:34.031006 containerd[1627]: time="2024-12-13T01:29:34.030936606Z" level=info msg="using legacy CRI server" Dec 13 01:29:34.031006 containerd[1627]: time="2024-12-13T01:29:34.030960654Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:34.032584 containerd[1627]: time="2024-12-13T01:29:34.032524918Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:34.034026 containerd[1627]: time="2024-12-13T01:29:34.033974262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:34.034183 containerd[1627]: time="2024-12-13T01:29:34.034077128Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:34.034183 containerd[1627]: time="2024-12-13T01:29:34.034127816Z" level=info msg="Start recovering state" Dec 13 01:29:34.034323 containerd[1627]: time="2024-12-13T01:29:34.034255688Z" level=info msg="Start event monitor" Dec 13 01:29:34.034323 containerd[1627]: time="2024-12-13T01:29:34.034274377Z" level=info msg="Start snapshots syncer" Dec 13 01:29:34.034323 containerd[1627]: time="2024-12-13T01:29:34.034287426Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:34.034323 containerd[1627]: time="2024-12-13T01:29:34.034293782Z" level=info msg="Start streaming server" Dec 13 01:29:34.034931 containerd[1627]: time="2024-12-13T01:29:34.034853045Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:34.037625 containerd[1627]: time="2024-12-13T01:29:34.034991760Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:34.035150 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:34.037755 containerd[1627]: time="2024-12-13T01:29:34.037740474Z" level=info msg="containerd successfully booted in 0.049508s" Dec 13 01:29:34.253426 tar[1625]: linux-amd64/LICENSE Dec 13 01:29:34.253426 tar[1625]: linux-amd64/README.md Dec 13 01:29:34.266111 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:29:34.689535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:34.690422 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:34.693242 systemd[1]: Startup finished in 6.834s (kernel) + 4.421s (userspace) = 11.255s. 
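Note: containerd brings up its CRI plugin but logs that no CNI configuration exists yet in /etc/cni/net.d, which is expected on a node that has not joined a cluster; pod networking stays uninitialized until a CNI plugin drops a config there. A loose sketch of that startup check (not containerd's actual loader, just the same filesystem test):

import json
from pathlib import Path

CNI_DIR = Path("/etc/cni/net.d")  # directory named in containerd's error above

configs = sorted(CNI_DIR.glob("*.conf*")) if CNI_DIR.is_dir() else []
if not configs:
    print(f"no network config found in {CNI_DIR}: cni plugin not initialized")
else:
    for cfg in configs:
        data = json.loads(cfg.read_text())
        print(cfg.name, "->", data.get("name", "<unnamed>"))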
Dec 13 01:29:34.704946 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:35.306312 kubelet[1733]: E1213 01:29:35.306222 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:35.310317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:35.310583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:45.561150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:45.569024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:45.703195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:45.708235 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:45.757348 kubelet[1758]: E1213 01:29:45.757243 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:45.762260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:45.762552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:56.013366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:29:56.020668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:56.174389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:56.185807 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:56.228585 kubelet[1779]: E1213 01:29:56.228489 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:56.232187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:56.232529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:06.377016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:30:06.383564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:06.526557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
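Note: the kubelet failure here (and its repeats through the rest of the log) is the stock not-yet-bootstrapped state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until that happens systemd restarts the unit on its timer and the restart counter climbs. A sketch of the pre-flight test a provisioning script might run before treating the loop as an error (the script and its message are illustrative, not kubelet's own code):

import sys
from pathlib import Path

# The file kubelet refuses to start without; kubeadm creates it on init/join.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    sys.exit(f"{KUBELET_CONFIG} missing: node not bootstrapped yet")
print("kubelet config present")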
Dec 13 01:30:06.528057 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:06.579555 kubelet[1800]: E1213 01:30:06.579472 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:06.583739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:06.584080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:16.627359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:30:16.634753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:16.784569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:16.793767 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:16.830939 kubelet[1821]: E1213 01:30:16.830873 1821 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:16.834724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:16.835037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:18.449810 update_engine[1611]: I20241213 01:30:18.449687 1611 update_attempter.cc:509] Updating boot flags... Dec 13 01:30:18.536506 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1839) Dec 13 01:30:18.587388 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1838) Dec 13 01:30:18.632435 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1838) Dec 13 01:30:26.876959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:30:26.884472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:27.024549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:27.026229 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:27.068278 kubelet[1863]: E1213 01:30:27.068216 1863 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:27.073064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:27.073319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:37.127379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:30:37.139658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:30:37.279660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:37.283302 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:37.325684 kubelet[1885]: E1213 01:30:37.325628 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:37.329275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:37.329548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:47.377126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 01:30:47.382590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:47.543248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:47.552865 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:47.594133 kubelet[1906]: E1213 01:30:47.594073 1906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:47.598184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:47.598439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:57.626977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 01:30:57.633573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:57.759575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:57.770723 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:57.805829 kubelet[1927]: E1213 01:30:57.805765 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:57.809908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:57.810144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:07.877289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 01:31:07.888642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:08.067717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:31:08.071299 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:08.121197 kubelet[1948]: E1213 01:31:08.121133 1948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:08.125032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:08.125288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:18.127167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 01:31:18.133948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:18.288558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:18.300082 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:18.363031 kubelet[1969]: E1213 01:31:18.362937 1969 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:18.368029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:18.368331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:28.377402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 01:31:28.384637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:28.568519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:28.581879 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:28.620405 kubelet[1990]: E1213 01:31:28.620333 1990 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:28.628361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:28.628719 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:28.754295 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:31:28.762796 systemd[1]: Started sshd@0-78.47.218.196:22-147.75.109.163:60446.service - OpenSSH per-connection server daemon (147.75.109.163:60446). Dec 13 01:31:29.759819 sshd[1999]: Accepted publickey for core from 147.75.109.163 port 60446 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:29.763936 sshd[1999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:29.779459 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:29.787739 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 13 01:31:29.792028 systemd-logind[1605]: New session 1 of user core.
Dec 13 01:31:29.816302 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:31:29.829231 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:31:29.835347 (systemd)[2005]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:31:29.949538 systemd[2005]: Queued start job for default target default.target.
Dec 13 01:31:29.949902 systemd[2005]: Created slice app.slice - User Application Slice.
Dec 13 01:31:29.949920 systemd[2005]: Reached target paths.target - Paths.
Dec 13 01:31:29.949932 systemd[2005]: Reached target timers.target - Timers.
Dec 13 01:31:29.954682 systemd[2005]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:31:29.963320 systemd[2005]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:31:29.963495 systemd[2005]: Reached target sockets.target - Sockets.
Dec 13 01:31:29.963536 systemd[2005]: Reached target basic.target - Basic System.
Dec 13 01:31:29.963621 systemd[2005]: Reached target default.target - Main User Target.
Dec 13 01:31:29.963692 systemd[2005]: Startup finished in 116ms.
Dec 13 01:31:29.964482 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:31:29.972722 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:31:30.666934 systemd[1]: Started sshd@1-78.47.218.196:22-147.75.109.163:60448.service - OpenSSH per-connection server daemon (147.75.109.163:60448).
Dec 13 01:31:31.675103 sshd[2017]: Accepted publickey for core from 147.75.109.163 port 60448 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:31.677370 sshd[2017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:31.684945 systemd-logind[1605]: New session 2 of user core.
Dec 13 01:31:31.691773 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:31:32.365593 sshd[2017]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:32.369208 systemd[1]: sshd@1-78.47.218.196:22-147.75.109.163:60448.service: Deactivated successfully.
Dec 13 01:31:32.375295 systemd-logind[1605]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:31:32.375881 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:31:32.377354 systemd-logind[1605]: Removed session 2.
Dec 13 01:31:32.529888 systemd[1]: Started sshd@2-78.47.218.196:22-147.75.109.163:60464.service - OpenSSH per-connection server daemon (147.75.109.163:60464).
Dec 13 01:31:33.526788 sshd[2025]: Accepted publickey for core from 147.75.109.163 port 60464 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:33.529847 sshd[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:33.536382 systemd-logind[1605]: New session 3 of user core.
Dec 13 01:31:33.545886 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:31:34.205497 sshd[2025]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:34.212309 systemd[1]: sshd@2-78.47.218.196:22-147.75.109.163:60464.service: Deactivated successfully.
Dec 13 01:31:34.215860 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:31:34.217192 systemd-logind[1605]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:31:34.218892 systemd-logind[1605]: Removed session 3.
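Editor's note: sshd logs each accepted key by its SHA256 fingerprint (the "SHA256:7MYu..." field above). OpenSSH computes this as an unpadded base64 encoding of SHA-256 over the decoded public key blob. A stdlib-only Go sketch reproducing the format from the base64 key field of an authorized_keys line, passed as a command-line argument:

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "os"
    )

    // OpenSSH-style fingerprint: SHA-256 over the decoded key blob,
    // base64-encoded without padding, prefixed with "SHA256:".
    func fingerprint(keyBlob []byte) string {
        sum := sha256.Sum256(keyBlob)
        return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: fp <base64-key-blob from authorized_keys>")
            return
        }
        blob, err := base64.StdEncoding.DecodeString(os.Args[1])
        if err != nil {
            fmt.Println("decode error:", err)
            return
        }
        fmt.Println(fingerprint(blob))
    }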
Dec 13 01:31:34.371687 systemd[1]: Started sshd@3-78.47.218.196:22-147.75.109.163:60474.service - OpenSSH per-connection server daemon (147.75.109.163:60474).
Dec 13 01:31:35.363259 sshd[2033]: Accepted publickey for core from 147.75.109.163 port 60474 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:35.365229 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:35.371898 systemd-logind[1605]: New session 4 of user core.
Dec 13 01:31:35.374771 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:31:36.056595 sshd[2033]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:36.063861 systemd[1]: sshd@3-78.47.218.196:22-147.75.109.163:60474.service: Deactivated successfully.
Dec 13 01:31:36.068339 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:31:36.069899 systemd-logind[1605]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:31:36.071640 systemd-logind[1605]: Removed session 4.
Dec 13 01:31:36.222869 systemd[1]: Started sshd@4-78.47.218.196:22-147.75.109.163:60476.service - OpenSSH per-connection server daemon (147.75.109.163:60476).
Dec 13 01:31:37.212668 sshd[2041]: Accepted publickey for core from 147.75.109.163 port 60476 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:37.215649 sshd[2041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:37.222547 systemd-logind[1605]: New session 5 of user core.
Dec 13 01:31:37.237854 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:31:37.754054 sudo[2045]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:31:37.754769 sudo[2045]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:37.775143 sudo[2045]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:37.935730 sshd[2041]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:37.940912 systemd[1]: sshd@4-78.47.218.196:22-147.75.109.163:60476.service: Deactivated successfully.
Dec 13 01:31:37.948602 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:31:37.949067 systemd-logind[1605]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:31:37.952047 systemd-logind[1605]: Removed session 5.
Dec 13 01:31:38.104632 systemd[1]: Started sshd@5-78.47.218.196:22-147.75.109.163:57692.service - OpenSSH per-connection server daemon (147.75.109.163:57692).
Dec 13 01:31:38.877229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 13 01:31:38.888062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:39.030241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:39.036756 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:39.071781 kubelet[2064]: E1213 01:31:39.071725 2064 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:39.076930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:39.077136 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:39.085549 sshd[2050]: Accepted publickey for core from 147.75.109.163 port 57692 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:39.087025 sshd[2050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:39.091039 systemd-logind[1605]: New session 6 of user core.
Dec 13 01:31:39.100637 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:31:39.616221 sudo[2076]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:31:39.616765 sudo[2076]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:39.623200 sudo[2076]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:39.634575 sudo[2075]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:31:39.635249 sudo[2075]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:39.658654 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:39.662565 auditctl[2079]: No rules
Dec 13 01:31:39.663066 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:31:39.663337 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:39.668935 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:39.700337 augenrules[2098]: No rules
Dec 13 01:31:39.702261 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:39.704684 sudo[2075]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:39.865944 sshd[2050]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:39.871179 systemd[1]: sshd@5-78.47.218.196:22-147.75.109.163:57692.service: Deactivated successfully.
Dec 13 01:31:39.877757 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:31:39.879289 systemd-logind[1605]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:31:39.880592 systemd-logind[1605]: Removed session 6.
Dec 13 01:31:40.034199 systemd[1]: Started sshd@6-78.47.218.196:22-147.75.109.163:57700.service - OpenSSH per-connection server daemon (147.75.109.163:57700).
Dec 13 01:31:41.037195 sshd[2107]: Accepted publickey for core from 147.75.109.163 port 57700 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:41.039230 sshd[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:41.045826 systemd-logind[1605]: New session 7 of user core.
Dec 13 01:31:41.054732 systemd[1]: Started session-7.scope - Session 7 of User core.
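Editor's note: after the rule files are removed, both auditctl and augenrules report "No rules" when audit-rules.service is restarted. The live ruleset can be inspected the same way; a small Go wrapper around the real `auditctl -l` command (requires root, shown only as a sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "auditctl -l" lists the currently loaded audit rules;
        // on this host it would print "No rules", matching the log.
        out, err := exec.Command("auditctl", "-l").CombinedOutput()
        if err != nil {
            fmt.Println("auditctl failed:", err)
        }
        fmt.Print(string(out))
    }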
Dec 13 01:31:41.565556 sudo[2111]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:31:41.566059 sudo[2111]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:41.868678 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:31:41.869071 (dockerd)[2128]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:31:42.138517 dockerd[2128]: time="2024-12-13T01:31:42.138463314Z" level=info msg="Starting up"
Dec 13 01:31:42.233066 dockerd[2128]: time="2024-12-13T01:31:42.232830235Z" level=info msg="Loading containers: start."
Dec 13 01:31:42.337471 kernel: Initializing XFRM netlink socket
Dec 13 01:31:42.421272 systemd-networkd[1249]: docker0: Link UP
Dec 13 01:31:42.442003 dockerd[2128]: time="2024-12-13T01:31:42.441960542Z" level=info msg="Loading containers: done."
Dec 13 01:31:42.456164 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1999070229-merged.mount: Deactivated successfully.
Dec 13 01:31:42.461134 dockerd[2128]: time="2024-12-13T01:31:42.461082747Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:31:42.461257 dockerd[2128]: time="2024-12-13T01:31:42.461176279Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:31:42.461282 dockerd[2128]: time="2024-12-13T01:31:42.461272335Z" level=info msg="Daemon has completed initialization"
Dec 13 01:31:42.489190 dockerd[2128]: time="2024-12-13T01:31:42.489125898Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:31:42.489447 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:31:43.517610 containerd[1627]: time="2024-12-13T01:31:43.517566867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:31:44.091200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722287943.mount: Deactivated successfully.
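Editor's note: dockerd reports "API listen on /run/docker.sock". The Engine API can be probed over that unix socket with nothing but the Go standard library; a sketch hitting the real /version endpoint (the "docker" host name in the URL is a placeholder, ignored once the dialer is pinned to the socket):

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Route every request over the unix socket the daemon reported.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://docker/version")
        if err != nil {
            fmt.Println("docker not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON with version/commit, cf. the "Docker daemon" line
    }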
Dec 13 01:31:45.094658 containerd[1627]: time="2024-12-13T01:31:45.094568762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:45.096007 containerd[1627]: time="2024-12-13T01:31:45.095939634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139346"
Dec 13 01:31:45.096560 containerd[1627]: time="2024-12-13T01:31:45.096209498Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:45.099047 containerd[1627]: time="2024-12-13T01:31:45.099018051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:45.100009 containerd[1627]: time="2024-12-13T01:31:45.099975551Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.582315612s"
Dec 13 01:31:45.100053 containerd[1627]: time="2024-12-13T01:31:45.100012952Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:31:45.121543 containerd[1627]: time="2024-12-13T01:31:45.121500535Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:31:46.413348 containerd[1627]: time="2024-12-13T01:31:46.413276231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:46.414302 containerd[1627]: time="2024-12-13T01:31:46.414259559Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217752"
Dec 13 01:31:46.415059 containerd[1627]: time="2024-12-13T01:31:46.415005958Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:46.417296 containerd[1627]: time="2024-12-13T01:31:46.417256637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:46.418352 containerd[1627]: time="2024-12-13T01:31:46.417983447Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.296372457s"
Dec 13 01:31:46.418352 containerd[1627]: time="2024-12-13T01:31:46.418019165Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:31:46.437425 containerd[1627]: time="2024-12-13T01:31:46.437372027Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:31:47.364050 containerd[1627]: time="2024-12-13T01:31:47.363180858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:47.364050 containerd[1627]: time="2024-12-13T01:31:47.364007523Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332842"
Dec 13 01:31:47.364607 containerd[1627]: time="2024-12-13T01:31:47.364562960Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:47.366852 containerd[1627]: time="2024-12-13T01:31:47.366814929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:47.368467 containerd[1627]: time="2024-12-13T01:31:47.367752297Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 930.350563ms"
Dec 13 01:31:47.368467 containerd[1627]: time="2024-12-13T01:31:47.367782886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:31:47.387492 containerd[1627]: time="2024-12-13T01:31:47.387463536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:31:48.284763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081413296.mount: Deactivated successfully.
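Editor's note: the pull log records both bytes read and wall time for each image, so effective pull throughput is directly computable. A quick Go calculation using the numbers logged above (bytes read paired with the reported pull durations):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the containerd pull log above.
        pulls := []struct {
            image string
            bytes int64
            dur   time.Duration
        }{
            {"kube-apiserver:v1.29.12", 35139346, 1582315612 * time.Nanosecond},
            {"kube-controller-manager:v1.29.12", 32217752, 1296372457 * time.Nanosecond},
            {"kube-scheduler:v1.29.12", 17332842, 930350563 * time.Nanosecond},
        }
        for _, p := range pulls {
            mibps := float64(p.bytes) / p.dur.Seconds() / (1 << 20)
            fmt.Printf("%-36s %6.1f MiB/s\n", p.image, mibps)
        }
    }

All three pulls land in the same rough 17-21 MiB/s band, consistent with a single registry link rather than a per-image bottleneck.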
Dec 13 01:31:48.524700 containerd[1627]: time="2024-12-13T01:31:48.524661251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.525560 containerd[1627]: time="2024-12-13T01:31:48.525429781Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619984"
Dec 13 01:31:48.526282 containerd[1627]: time="2024-12-13T01:31:48.526242267Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.528309 containerd[1627]: time="2024-12-13T01:31:48.528265370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.529013 containerd[1627]: time="2024-12-13T01:31:48.528696417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.141203342s"
Dec 13 01:31:48.529013 containerd[1627]: time="2024-12-13T01:31:48.528721925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:31:48.546749 containerd[1627]: time="2024-12-13T01:31:48.546660722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:31:49.108954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792756424.mount: Deactivated successfully.
Dec 13 01:31:49.110973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Dec 13 01:31:49.119647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:49.275635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:49.277285 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:49.324250 kubelet[2383]: E1213 01:31:49.324206 2383 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:49.328262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:49.330648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:49.799653 containerd[1627]: time="2024-12-13T01:31:49.799590087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:49.800720 containerd[1627]: time="2024-12-13T01:31:49.800500011Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Dec 13 01:31:49.801452 containerd[1627]: time="2024-12-13T01:31:49.801428490Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:49.803952 containerd[1627]: time="2024-12-13T01:31:49.803915332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:49.804991 containerd[1627]: time="2024-12-13T01:31:49.804716364Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.258027909s"
Dec 13 01:31:49.804991 containerd[1627]: time="2024-12-13T01:31:49.804742905Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:31:49.825581 containerd[1627]: time="2024-12-13T01:31:49.825545026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:31:50.340630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831003409.mount: Deactivated successfully.
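Editor's note: mount units like var-lib-containerd-tmpmounts-containerd\x2dmount1831003409.mount follow systemd's unit-name escaping, where "/" in a path becomes "-" and a literal "-" becomes "\x2d". A simplified Go decoder (real systemd escaping handles more cases, e.g. arbitrary bytes as \xXX and leading dots, so this is a sketch, not a full implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    func unescapeMountUnit(unit string) string {
        s := strings.TrimSuffix(unit, ".mount")
        s = strings.ReplaceAll(s, `\x2d`, "\x00") // protect escaped dashes
        s = strings.ReplaceAll(s, "-", "/")       // unit dashes were path slashes
        s = strings.ReplaceAll(s, "\x00", "-")    // restore literal dashes
        return "/" + s
    }

    func main() {
        fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1831003409.mount`))
        // prints: /var/lib/containerd/tmpmounts/containerd-mount1831003409
    }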
Dec 13 01:31:50.347653 containerd[1627]: time="2024-12-13T01:31:50.347479571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:50.349069 containerd[1627]: time="2024-12-13T01:31:50.349001150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Dec 13 01:31:50.349675 containerd[1627]: time="2024-12-13T01:31:50.349471882Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:50.352506 containerd[1627]: time="2024-12-13T01:31:50.352462717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:50.353480 containerd[1627]: time="2024-12-13T01:31:50.353310720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 527.734203ms"
Dec 13 01:31:50.353480 containerd[1627]: time="2024-12-13T01:31:50.353350918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:31:50.382145 containerd[1627]: time="2024-12-13T01:31:50.382081133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:31:50.938267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539504613.mount: Deactivated successfully.
Dec 13 01:31:52.307501 containerd[1627]: time="2024-12-13T01:31:52.307435179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:52.308610 containerd[1627]: time="2024-12-13T01:31:52.308562231Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705"
Dec 13 01:31:52.309709 containerd[1627]: time="2024-12-13T01:31:52.309666839Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:52.312188 containerd[1627]: time="2024-12-13T01:31:52.312150919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:52.313062 containerd[1627]: time="2024-12-13T01:31:52.312917753Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.930802794s"
Dec 13 01:31:52.313062 containerd[1627]: time="2024-12-13T01:31:52.312955076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:31:55.530629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:55.541613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:55.558877 systemd[1]: Reloading requested from client PID 2549 ('systemctl') (unit session-7.scope)...
Dec 13 01:31:55.558902 systemd[1]: Reloading...
Dec 13 01:31:55.682463 zram_generator::config[2590]: No configuration found.
Dec 13 01:31:55.788478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:31:55.849570 systemd[1]: Reloading finished in 290 ms.
Dec 13 01:31:55.892913 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:31:55.893203 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:31:55.893722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:55.899622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:56.041598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:56.047372 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:31:56.099869 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:31:56.099869 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:31:56.099869 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:31:56.101680 kubelet[2655]: I1213 01:31:56.101626 2655 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:31:56.318333 kubelet[2655]: I1213 01:31:56.318058 2655 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:31:56.318333 kubelet[2655]: I1213 01:31:56.318128 2655 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:31:56.318665 kubelet[2655]: I1213 01:31:56.318636 2655 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:31:56.345320 kubelet[2655]: E1213 01:31:56.345247 2655 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.218.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.345654 kubelet[2655]: I1213 01:31:56.345385 2655 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:31:56.367158 kubelet[2655]: I1213 01:31:56.367129 2655 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:31:56.368214 kubelet[2655]: I1213 01:31:56.368161 2655 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:31:56.369181 kubelet[2655]: I1213 01:31:56.369129 2655 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:31:56.369662 kubelet[2655]: I1213 01:31:56.369619 2655 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:31:56.369662 kubelet[2655]: I1213 01:31:56.369639 2655 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:31:56.369781 kubelet[2655]: I1213 01:31:56.369739 2655 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:31:56.371475 kubelet[2655]: I1213 01:31:56.369852 2655 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:31:56.371475 kubelet[2655]: I1213 01:31:56.369872 2655 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:31:56.371475 kubelet[2655]: I1213 01:31:56.369903 2655 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:31:56.371475 kubelet[2655]: I1213 01:31:56.369918 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:31:56.371475 kubelet[2655]: W1213 01:31:56.370603 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-82283d2dd3&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.371475 kubelet[2655]: E1213 01:31:56.370684 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-82283d2dd3&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.371475 kubelet[2655]: I1213 01:31:56.371196 2655 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:31:56.374828 kubelet[2655]: I1213 01:31:56.374794 2655 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:31:56.375912 kubelet[2655]: W1213 01:31:56.375879 2655 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:31:56.376732 kubelet[2655]: I1213 01:31:56.376695 2655 server.go:1256] "Started kubelet"
Dec 13 01:31:56.376853 kubelet[2655]: W1213 01:31:56.376783 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.376853 kubelet[2655]: E1213 01:31:56.376820 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.379549 kubelet[2655]: I1213 01:31:56.379513 2655 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:31:56.380168 kubelet[2655]: I1213 01:31:56.380125 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:31:56.380479 kubelet[2655]: I1213 01:31:56.380393 2655 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:31:56.382313 kubelet[2655]: I1213 01:31:56.382274 2655 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:31:56.388093 kubelet[2655]: I1213 01:31:56.388064 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:31:56.389532 kubelet[2655]: E1213 01:31:56.389461 2655 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.218.196:6443/api/v1/namespaces/default/events\": dial tcp 78.47.218.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-e-82283d2dd3.1810987c59262733 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-e-82283d2dd3,UID:ci-4081-2-1-e-82283d2dd3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-82283d2dd3,},FirstTimestamp:2024-12-13 01:31:56.376680243 +0000 UTC m=+0.324753965,LastTimestamp:2024-12-13 01:31:56.376680243 +0000 UTC m=+0.324753965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-82283d2dd3,}"
Dec 13 01:31:56.394508 kubelet[2655]: E1213 01:31:56.394479 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:31:56.394612 kubelet[2655]: I1213 01:31:56.394513 2655 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:31:56.394612 kubelet[2655]: I1213 01:31:56.394599 2655 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:31:56.394701 kubelet[2655]: I1213 01:31:56.394638 2655 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:31:56.394909 kubelet[2655]: W1213 01:31:56.394866 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.394909 kubelet[2655]: E1213 01:31:56.394906 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.395225 kubelet[2655]: E1213 01:31:56.395200 2655 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-82283d2dd3?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="200ms"
Dec 13 01:31:56.395716 kubelet[2655]: E1213 01:31:56.395690 2655 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:31:56.401450 kubelet[2655]: I1213 01:31:56.399652 2655 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:31:56.401450 kubelet[2655]: I1213 01:31:56.399673 2655 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:31:56.401450 kubelet[2655]: I1213 01:31:56.399797 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:31:56.415366 kubelet[2655]: I1213 01:31:56.415340 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:31:56.416541 kubelet[2655]: I1213 01:31:56.416519 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:31:56.416592 kubelet[2655]: I1213 01:31:56.416547 2655 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:31:56.416592 kubelet[2655]: I1213 01:31:56.416564 2655 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:31:56.416644 kubelet[2655]: E1213 01:31:56.416600 2655 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:31:56.424362 kubelet[2655]: W1213 01:31:56.424312 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.424445 kubelet[2655]: E1213 01:31:56.424362 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:56.437082 kubelet[2655]: I1213 01:31:56.437054 2655 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:31:56.437186 kubelet[2655]: I1213 01:31:56.437167 2655 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:31:56.437277 kubelet[2655]: I1213 01:31:56.437267 2655 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:31:56.439132 kubelet[2655]: I1213 01:31:56.439119 2655 policy_none.go:49] "None policy: Start"
Dec 13 01:31:56.439657 kubelet[2655]: I1213 01:31:56.439643 2655 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:31:56.439814 kubelet[2655]: I1213 01:31:56.439800 2655 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:31:56.444281 kubelet[2655]: I1213 01:31:56.444267 2655 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:31:56.444559 kubelet[2655]: I1213 01:31:56.444547 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:31:56.450729 kubelet[2655]: E1213 01:31:56.450717 2655 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:31:56.496735 kubelet[2655]: I1213 01:31:56.496697 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.496984 kubelet[2655]: E1213 01:31:56.496959 2655 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.517325 kubelet[2655]: I1213 01:31:56.517274 2655 topology_manager.go:215] "Topology Admit Handler" podUID="28ed054df02f4085c922771a9aaaf5ba" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.519123 kubelet[2655]: I1213 01:31:56.519093 2655 topology_manager.go:215] "Topology Admit Handler" podUID="168efcc5ad359650209cf4142b5f05fb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.520591 kubelet[2655]: I1213 01:31:56.520540 2655 topology_manager.go:215] "Topology Admit Handler" podUID="4ef92bbe298f9baa9151641ffce940fe" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.595917 kubelet[2655]: E1213 01:31:56.595760 2655 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-82283d2dd3?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="400ms"
Dec 13 01:31:56.696750 kubelet[2655]: I1213 01:31:56.696625 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ef92bbe298f9baa9151641ffce940fe-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-e-82283d2dd3\" (UID: \"4ef92bbe298f9baa9151641ffce940fe\") " pod="kube-system/kube-scheduler-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.696956 kubelet[2655]: I1213 01:31:56.696806 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28ed054df02f4085c922771a9aaaf5ba-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" (UID: \"28ed054df02f4085c922771a9aaaf5ba\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.696956 kubelet[2655]: I1213 01:31:56.696902 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.696956 kubelet[2655]: I1213 01:31:56.696940 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.697111 kubelet[2655]: I1213 01:31:56.696970 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.697111 kubelet[2655]: I1213 01:31:56.697023 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.697111 kubelet[2655]: I1213 01:31:56.697071 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28ed054df02f4085c922771a9aaaf5ba-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" (UID: \"28ed054df02f4085c922771a9aaaf5ba\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.697111 kubelet[2655]: I1213 01:31:56.697095 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28ed054df02f4085c922771a9aaaf5ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" (UID: \"28ed054df02f4085c922771a9aaaf5ba\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.697111 kubelet[2655]: I1213 01:31:56.697114 2655 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.699993 kubelet[2655]: I1213 01:31:56.699952 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.700578 kubelet[2655]: E1213 01:31:56.700532 2655 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:56.827455 containerd[1627]: time="2024-12-13T01:31:56.827373036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-e-82283d2dd3,Uid:28ed054df02f4085c922771a9aaaf5ba,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:56.832965 containerd[1627]: time="2024-12-13T01:31:56.832484449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-e-82283d2dd3,Uid:168efcc5ad359650209cf4142b5f05fb,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:56.835297 containerd[1627]: time="2024-12-13T01:31:56.835139161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-e-82283d2dd3,Uid:4ef92bbe298f9baa9151641ffce940fe,Namespace:kube-system,Attempt:0,}"
Dec 13 01:31:56.997384 kubelet[2655]: E1213 01:31:56.997336 2655 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-82283d2dd3?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="800ms"
Dec 13 01:31:57.104182 kubelet[2655]: I1213 01:31:57.103933 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:57.104786 kubelet[2655]: E1213 01:31:57.104485 2655 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:57.334754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750313841.mount: Deactivated successfully.
Dec 13 01:31:57.343457 containerd[1627]: time="2024-12-13T01:31:57.343348866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:57.345908 containerd[1627]: time="2024-12-13T01:31:57.345546102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:31:57.346945 containerd[1627]: time="2024-12-13T01:31:57.346889609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:57.348370 containerd[1627]: time="2024-12-13T01:31:57.348320214Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:57.351137 containerd[1627]: time="2024-12-13T01:31:57.350908517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076"
Dec 13 01:31:57.351137 containerd[1627]: time="2024-12-13T01:31:57.350962490Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:57.353495 containerd[1627]: time="2024-12-13T01:31:57.353270111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:31:57.354629 containerd[1627]: time="2024-12-13T01:31:57.354585723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:31:57.361216 containerd[1627]: time="2024-12-13T01:31:57.361170890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.603753ms"
Dec 13 01:31:57.365197 containerd[1627]: time="2024-12-13T01:31:57.365112358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.58819ms"
Dec 13 01:31:57.368251 containerd[1627]: time="2024-12-13T01:31:57.368199936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 532.960311ms"
Dec 13 01:31:57.448988 kubelet[2655]: W1213 01:31:57.448910 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.448988 kubelet[2655]: E1213 01:31:57.448966 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.480099 containerd[1627]: time="2024-12-13T01:31:57.479892743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:57.480099 containerd[1627]: time="2024-12-13T01:31:57.479941117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:57.480099 containerd[1627]: time="2024-12-13T01:31:57.479954093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:57.480099 containerd[1627]: time="2024-12-13T01:31:57.480026162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:57.481097 containerd[1627]: time="2024-12-13T01:31:57.480771893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:57.481097 containerd[1627]: time="2024-12-13T01:31:57.480820346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:57.481097 containerd[1627]: time="2024-12-13T01:31:57.480836668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:57.481097 containerd[1627]: time="2024-12-13T01:31:57.480921682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:57.484326 containerd[1627]: time="2024-12-13T01:31:57.484083343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:31:57.484326 containerd[1627]: time="2024-12-13T01:31:57.484120485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:31:57.484326 containerd[1627]: time="2024-12-13T01:31:57.484133620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:57.484326 containerd[1627]: time="2024-12-13T01:31:57.484195400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:31:57.510365 kubelet[2655]: W1213 01:31:57.509665 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.510365 kubelet[2655]: E1213 01:31:57.509713 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.560642 containerd[1627]: time="2024-12-13T01:31:57.560594035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-e-82283d2dd3,Uid:168efcc5ad359650209cf4142b5f05fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"64f6e9a26238080aae9a54675a9b49dfde128ebab6bafbb6f106a9ac0d4468ac\""
Dec 13 01:31:57.567926 containerd[1627]: time="2024-12-13T01:31:57.567848236Z" level=info msg="CreateContainer within sandbox \"64f6e9a26238080aae9a54675a9b49dfde128ebab6bafbb6f106a9ac0d4468ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:31:57.571103 containerd[1627]: time="2024-12-13T01:31:57.570522123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-e-82283d2dd3,Uid:28ed054df02f4085c922771a9aaaf5ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6631fe9dfc739b804382a78630a12ff10923644a3090884b62d735f038c52ad3\""
Dec 13 01:31:57.574661 containerd[1627]: time="2024-12-13T01:31:57.574489531Z" level=info msg="CreateContainer within sandbox \"6631fe9dfc739b804382a78630a12ff10923644a3090884b62d735f038c52ad3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:31:57.574661 containerd[1627]: time="2024-12-13T01:31:57.574602089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-e-82283d2dd3,Uid:4ef92bbe298f9baa9151641ffce940fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c522e7a5c37f5d7f98880e2a02ef4a0e32c59c9eec1f624619e6ed82816d408\""
Dec 13 01:31:57.577407 containerd[1627]: time="2024-12-13T01:31:57.577380869Z" level=info msg="CreateContainer within sandbox \"6c522e7a5c37f5d7f98880e2a02ef4a0e32c59c9eec1f624619e6ed82816d408\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:31:57.587333 containerd[1627]: time="2024-12-13T01:31:57.586747271Z" level=info msg="CreateContainer within sandbox \"64f6e9a26238080aae9a54675a9b49dfde128ebab6bafbb6f106a9ac0d4468ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b90d6b548d93f1668b582011b7c0be14df37d2f6164bbf33d676c332804f2c49\""
Dec 13 01:31:57.587333 containerd[1627]: time="2024-12-13T01:31:57.587149629Z" level=info msg="StartContainer for \"b90d6b548d93f1668b582011b7c0be14df37d2f6164bbf33d676c332804f2c49\""
Dec 13 01:31:57.589129 containerd[1627]: time="2024-12-13T01:31:57.589104857Z" level=info msg="CreateContainer within sandbox \"6c522e7a5c37f5d7f98880e2a02ef4a0e32c59c9eec1f624619e6ed82816d408\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8ce3956f008f3c5a177f0c817f06d95d0e62cde68b51014f7b8f3f4f10b3be09\""
Dec 13 01:31:57.590240 containerd[1627]: time="2024-12-13T01:31:57.589565939Z" level=info msg="StartContainer for \"8ce3956f008f3c5a177f0c817f06d95d0e62cde68b51014f7b8f3f4f10b3be09\""
Dec 13 01:31:57.595166 containerd[1627]: time="2024-12-13T01:31:57.595140783Z" level=info msg="CreateContainer within sandbox \"6631fe9dfc739b804382a78630a12ff10923644a3090884b62d735f038c52ad3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a2bed5c1ef6990497b6233b787c8918145621954292cee0da2ff98da721c9bed\""
Dec 13 01:31:57.595508 containerd[1627]: time="2024-12-13T01:31:57.595405545Z" level=info msg="StartContainer for \"a2bed5c1ef6990497b6233b787c8918145621954292cee0da2ff98da721c9bed\""
Dec 13 01:31:57.670111 containerd[1627]: time="2024-12-13T01:31:57.670006637Z" level=info msg="StartContainer for \"a2bed5c1ef6990497b6233b787c8918145621954292cee0da2ff98da721c9bed\" returns successfully"
Dec 13 01:31:57.696314 containerd[1627]: time="2024-12-13T01:31:57.696153411Z" level=info msg="StartContainer for \"b90d6b548d93f1668b582011b7c0be14df37d2f6164bbf33d676c332804f2c49\" returns successfully"
Dec 13 01:31:57.700208 containerd[1627]: time="2024-12-13T01:31:57.700184631Z" level=info msg="StartContainer for \"8ce3956f008f3c5a177f0c817f06d95d0e62cde68b51014f7b8f3f4f10b3be09\" returns successfully"
Dec 13 01:31:57.793932 kubelet[2655]: W1213 01:31:57.793840 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-82283d2dd3&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.793932 kubelet[2655]: E1213 01:31:57.793890 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-82283d2dd3&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.798371 kubelet[2655]: E1213 01:31:57.798331 2655 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-82283d2dd3?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="1.6s"
Dec 13 01:31:57.896310 kubelet[2655]: E1213 01:31:57.896283 2655 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.218.196:6443/api/v1/namespaces/default/events\": dial tcp 78.47.218.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-e-82283d2dd3.1810987c59262733 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-e-82283d2dd3,UID:ci-4081-2-1-e-82283d2dd3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-82283d2dd3,},FirstTimestamp:2024-12-13 01:31:56.376680243 +0000 UTC m=+0.324753965,LastTimestamp:2024-12-13 01:31:56.376680243 +0000 UTC m=+0.324753965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-82283d2dd3,}"
Dec 13 01:31:57.908824 kubelet[2655]: I1213 01:31:57.908546 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:57.908824 kubelet[2655]: E1213 01:31:57.908760 2655 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:57.912716 kubelet[2655]: W1213 01:31:57.912700 2655 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:57.913021 kubelet[2655]: E1213 01:31:57.912836 2655 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused
Dec 13 01:31:59.404736 kubelet[2655]: E1213 01:31:59.404661 2655 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-e-82283d2dd3\" not found" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:59.463633 kubelet[2655]: E1213 01:31:59.463538 2655 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-e-82283d2dd3" not found
Dec 13 01:31:59.513033 kubelet[2655]: I1213 01:31:59.512582 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:59.529295 kubelet[2655]: I1213 01:31:59.529245 2655 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:31:59.555446 kubelet[2655]: E1213 01:31:59.554406 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:31:59.654983 kubelet[2655]: E1213 01:31:59.654804 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:31:59.755644 kubelet[2655]: E1213 01:31:59.755581 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:31:59.856483 kubelet[2655]: E1213 01:31:59.856400 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:31:59.957271 kubelet[2655]: E1213 01:31:59.957084 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:32:00.058310 kubelet[2655]: E1213 01:32:00.058221 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:32:00.159007 kubelet[2655]: E1213 01:32:00.158921 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:32:00.259979 kubelet[2655]: E1213 01:32:00.259808 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found"
Dec 13 01:32:00.360878 kubelet[2655]: E1213
01:32:00.360806 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found" Dec 13 01:32:00.460980 kubelet[2655]: E1213 01:32:00.460937 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found" Dec 13 01:32:00.562055 kubelet[2655]: E1213 01:32:00.561753 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found" Dec 13 01:32:00.662161 kubelet[2655]: E1213 01:32:00.662127 2655 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-e-82283d2dd3\" not found" Dec 13 01:32:01.374284 kubelet[2655]: I1213 01:32:01.374230 2655 apiserver.go:52] "Watching apiserver" Dec 13 01:32:01.395701 kubelet[2655]: I1213 01:32:01.395672 2655 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:02.156745 systemd[1]: Reloading requested from client PID 2929 ('systemctl') (unit session-7.scope)... Dec 13 01:32:02.156760 systemd[1]: Reloading... Dec 13 01:32:02.225443 zram_generator::config[2969]: No configuration found. Dec 13 01:32:02.328201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:02.393270 systemd[1]: Reloading finished in 236 ms. Dec 13 01:32:02.427546 kubelet[2655]: I1213 01:32:02.427231 2655 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:02.427794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:02.446474 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:02.446887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:02.452658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:02.581525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:02.581746 (kubelet)[3030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:02.630722 kubelet[3030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:02.631251 kubelet[3030]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:02.632073 kubelet[3030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
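The repeated "connection refused" failures above come from the kubelet's client-go reflectors, which attempt an initial paged LIST against the API server before the static kube-apiserver pod is listening on 78.47.218.196:6443. A minimal sketch of the same call in Go with client-go (the kubeconfig path here is illustrative only; the kubelet actually uses its own bootstrap credentials):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; any client config pointing at
	// https://78.47.218.196:6443 would reproduce the error above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same initial list the reflector performs (limit=500, resourceVersion=0).
	_, err = cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
	if err != nil {
		// Until kube-apiserver accepts connections, this fails with
		// "dial tcp ...:6443: connect: connection refused" and the
		// reflector retries with backoff, as seen in the log.
		fmt.Println("list failed:", err)
	}
}
```

Once the apiserver container started above begins serving, the same listings succeed and node registration goes through, which is exactly the transition visible between 01:31:57 and 01:31:59.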
Dec 13 01:32:02.632073 kubelet[3030]: I1213 01:32:02.631433 3030 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:32:02.639124 kubelet[3030]: I1213 01:32:02.639100 3030 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:32:02.639124 kubelet[3030]: I1213 01:32:02.639119 3030 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:32:02.639310 kubelet[3030]: I1213 01:32:02.639274 3030 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:32:02.640803 kubelet[3030]: I1213 01:32:02.640777 3030 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:32:02.642338 kubelet[3030]: I1213 01:32:02.642256 3030 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:32:02.648503 kubelet[3030]: I1213 01:32:02.648490 3030 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:32:02.649315 kubelet[3030]: I1213 01:32:02.649027 3030 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:32:02.649315 kubelet[3030]: I1213 01:32:02.649149 3030 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:32:02.649315 kubelet[3030]: I1213 01:32:02.649169 3030 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:32:02.649315 kubelet[3030]: I1213 01:32:02.649178 3030 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:32:02.649315 kubelet[3030]: I1213 01:32:02.649207 3030 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:32:02.650807 kubelet[3030]: I1213 01:32:02.650758 3030 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:32:02.650807 kubelet[3030]: I1213 01:32:02.650777 3030 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:32:02.651363 kubelet[3030]: I1213 01:32:02.651224 3030 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:32:02.651363 kubelet[3030]: I1213 01:32:02.651243 3030 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:32:02.656539 kubelet[3030]: I1213 01:32:02.656526 3030 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:32:02.656795 kubelet[3030]: I1213 01:32:02.656735 3030 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:32:02.667060 kubelet[3030]: I1213 01:32:02.666410 3030 server.go:1256] "Started kubelet"
Dec 13 01:32:02.668879 kubelet[3030]: I1213 01:32:02.667987 3030 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:32:02.668879 kubelet[3030]: I1213 01:32:02.668205 3030 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:32:02.670515 kubelet[3030]: I1213 01:32:02.670500 3030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:32:02.671265 kubelet[3030]: I1213 01:32:02.670968 3030 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:32:02.672277 kubelet[3030]: I1213 01:32:02.671589 3030 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:32:02.676682 kubelet[3030]: I1213 01:32:02.675222 3030 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:32:02.676682 kubelet[3030]: I1213 01:32:02.675316 3030 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:32:02.676682 kubelet[3030]: I1213 01:32:02.675476 3030 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:32:02.680505 kubelet[3030]: I1213 01:32:02.678015 3030 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:32:02.680063 sudo[3043]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 01:32:02.680383 sudo[3043]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 01:32:02.682563 kubelet[3030]: I1213 01:32:02.680971 3030 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:32:02.683044 kubelet[3030]: I1213 01:32:02.683007 3030 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:32:02.686610 kubelet[3030]: E1213 01:32:02.686583 3030 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:32:02.690112 kubelet[3030]: I1213 01:32:02.690093 3030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:32:02.692500 kubelet[3030]: I1213 01:32:02.692314 3030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:32:02.692500 kubelet[3030]: I1213 01:32:02.692337 3030 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:32:02.692500 kubelet[3030]: I1213 01:32:02.692349 3030 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:32:02.692500 kubelet[3030]: E1213 01:32:02.692383 3030 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:32:02.757715 kubelet[3030]: I1213 01:32:02.757670 3030 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:32:02.757844 kubelet[3030]: I1213 01:32:02.757834 3030 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:32:02.758057 kubelet[3030]: I1213 01:32:02.757893 3030 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:32:02.758057 kubelet[3030]: I1213 01:32:02.758002 3030 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:32:02.758057 kubelet[3030]: I1213 01:32:02.758019 3030 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:32:02.758057 kubelet[3030]: I1213 01:32:02.758026 3030 policy_none.go:49] "None policy: Start"
Dec 13 01:32:02.758717 kubelet[3030]: I1213 01:32:02.758705 3030 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:32:02.758911 kubelet[3030]: I1213 01:32:02.758811 3030 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:32:02.759121 kubelet[3030]: I1213 01:32:02.759110 3030 state_mem.go:75] "Updated machine memory state"
Dec 13 01:32:02.760456 kubelet[3030]: I1213 01:32:02.760376 3030 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:32:02.764478 kubelet[3030]: I1213 01:32:02.764364 3030 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:32:02.793368 kubelet[3030]: I1213 01:32:02.793353 3030 topology_manager.go:215] "Topology Admit Handler" podUID="28ed054df02f4085c922771a9aaaf5ba" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.794105 kubelet[3030]: I1213 01:32:02.793500 3030 topology_manager.go:215] "Topology Admit Handler" podUID="168efcc5ad359650209cf4142b5f05fb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.794105 kubelet[3030]: I1213 01:32:02.793531 3030 topology_manager.go:215] "Topology Admit Handler" podUID="4ef92bbe298f9baa9151641ffce940fe" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.870431 kubelet[3030]: I1213 01:32:02.869813 3030 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.881684 kubelet[3030]: I1213 01:32:02.881653 3030 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.881763 kubelet[3030]: I1213 01:32:02.881708 3030 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978150 kubelet[3030]: I1213 01:32:02.978058 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ef92bbe298f9baa9151641ffce940fe-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-e-82283d2dd3\" (UID: \"4ef92bbe298f9baa9151641ffce940fe\") " pod="kube-system/kube-scheduler-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978838 kubelet[3030]: I1213 01:32:02.978454 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28ed054df02f4085c922771a9aaaf5ba-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" (UID: \"28ed054df02f4085c922771a9aaaf5ba\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978838 kubelet[3030]: I1213 01:32:02.978587 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978838 kubelet[3030]: I1213 01:32:02.978610 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978838 kubelet[3030]: I1213 01:32:02.978686 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978838 kubelet[3030]: I1213 01:32:02.978709 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978991 kubelet[3030]: I1213 01:32:02.978725 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28ed054df02f4085c922771a9aaaf5ba-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" (UID: \"28ed054df02f4085c922771a9aaaf5ba\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978991 kubelet[3030]: I1213 01:32:02.978861 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28ed054df02f4085c922771a9aaaf5ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" (UID: \"28ed054df02f4085c922771a9aaaf5ba\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:02.978991 kubelet[3030]: I1213 01:32:02.978881 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/168efcc5ad359650209cf4142b5f05fb-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-82283d2dd3\" (UID: \"168efcc5ad359650209cf4142b5f05fb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:03.214376 sudo[3043]: pam_unix(sudo:session): session closed for user root
Dec 13 01:32:03.655339 kubelet[3030]: I1213 01:32:03.655297 3030 apiserver.go:52] "Watching apiserver"
Dec 13 01:32:03.676475 kubelet[3030]: I1213 01:32:03.676443 3030 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:32:03.728618 kubelet[3030]: E1213 01:32:03.728122 3030 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-e-82283d2dd3\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3"
Dec 13 01:32:03.758804 kubelet[3030]: I1213 01:32:03.758594 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-e-82283d2dd3" podStartSLOduration=1.7584987380000001 podStartE2EDuration="1.758498738s" podCreationTimestamp="2024-12-13 01:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:03.750128589 +0000 UTC m=+1.162997583" watchObservedRunningTime="2024-12-13 01:32:03.758498738 +0000 UTC m=+1.171367721"
Dec 13 01:32:03.758804 kubelet[3030]: I1213 01:32:03.758755 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-e-82283d2dd3" podStartSLOduration=1.758736877 podStartE2EDuration="1.758736877s" podCreationTimestamp="2024-12-13 01:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:03.757456877 +0000 UTC m=+1.170325861" watchObservedRunningTime="2024-12-13 01:32:03.758736877 +0000 UTC m=+1.171605861"
Dec 13 01:32:03.778843 kubelet[3030]: I1213 01:32:03.778779 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-e-82283d2dd3" podStartSLOduration=1.7787511010000001 podStartE2EDuration="1.778751101s" podCreationTimestamp="2024-12-13 01:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:03.766234222 +0000 UTC m=+1.179103205" watchObservedRunningTime="2024-12-13 01:32:03.778751101 +0000 UTC m=+1.191620085"
Dec 13 01:32:04.464858 sudo[2111]: pam_unix(sudo:session): session closed for user root
Dec 13 01:32:04.625865 sshd[2107]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:04.629076 systemd[1]: sshd@6-78.47.218.196:22-147.75.109.163:57700.service: Deactivated successfully.
Dec 13 01:32:04.635384 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:32:04.636546 systemd-logind[1605]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:32:04.637649 systemd-logind[1605]: Removed session 7.
Dec 13 01:32:16.879615 kubelet[3030]: I1213 01:32:16.879533 3030 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:32:16.880033 containerd[1627]: time="2024-12-13T01:32:16.879834043Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:32:16.880293 kubelet[3030]: I1213 01:32:16.880062 3030 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:32:16.904048 kubelet[3030]: I1213 01:32:16.903898 3030 topology_manager.go:215] "Topology Admit Handler" podUID="640e382c-db84-4ad9-94c4-fe1ab068573d" podNamespace="kube-system" podName="cilium-operator-5cc964979-w9tb6"
Dec 13 01:32:17.061977 kubelet[3030]: I1213 01:32:17.061900 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/640e382c-db84-4ad9-94c4-fe1ab068573d-cilium-config-path\") pod \"cilium-operator-5cc964979-w9tb6\" (UID: \"640e382c-db84-4ad9-94c4-fe1ab068573d\") " pod="kube-system/cilium-operator-5cc964979-w9tb6"
Dec 13 01:32:17.061977 kubelet[3030]: I1213 01:32:17.061947 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwwx\" (UniqueName: \"kubernetes.io/projected/640e382c-db84-4ad9-94c4-fe1ab068573d-kube-api-access-shwwx\") pod \"cilium-operator-5cc964979-w9tb6\" (UID: \"640e382c-db84-4ad9-94c4-fe1ab068573d\") " pod="kube-system/cilium-operator-5cc964979-w9tb6"
Dec 13 01:32:17.176695 kubelet[3030]: E1213 01:32:17.176062 3030 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 01:32:17.176695 kubelet[3030]: E1213 01:32:17.176096 3030 projected.go:200] Error preparing data for projected volume kube-api-access-shwwx for pod kube-system/cilium-operator-5cc964979-w9tb6: configmap "kube-root-ca.crt" not found
Dec 13 01:32:17.176695 kubelet[3030]: E1213 01:32:17.176188 3030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/640e382c-db84-4ad9-94c4-fe1ab068573d-kube-api-access-shwwx podName:640e382c-db84-4ad9-94c4-fe1ab068573d nodeName:}" failed. No retries permitted until 2024-12-13 01:32:17.676167687 +0000 UTC m=+15.089036681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-shwwx" (UniqueName: "kubernetes.io/projected/640e382c-db84-4ad9-94c4-fe1ab068573d-kube-api-access-shwwx") pod "cilium-operator-5cc964979-w9tb6" (UID: "640e382c-db84-4ad9-94c4-fe1ab068573d") : configmap "kube-root-ca.crt" not found
Dec 13 01:32:17.260987 kubelet[3030]: I1213 01:32:17.260901 3030 topology_manager.go:215] "Topology Admit Handler" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" podNamespace="kube-system" podName="cilium-cskzn"
Dec 13 01:32:17.264624 kubelet[3030]: I1213 01:32:17.264590 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-lib-modules\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.265056 kubelet[3030]: I1213 01:32:17.264644 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ljrh\" (UniqueName: \"kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-kube-api-access-4ljrh\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.265056 kubelet[3030]: I1213 01:32:17.264677 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cni-path\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.265056 kubelet[3030]: I1213 01:32:17.264710 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-bpf-maps\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.265056 kubelet[3030]: I1213 01:32:17.264746 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-cgroup\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.265056 kubelet[3030]: I1213 01:32:17.264776 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7352c2-33a0-4d60-8314-5b2e610c838f-clustermesh-secrets\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.265056 kubelet[3030]: I1213 01:32:17.264801 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-xtables-lock\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.266209 kubelet[3030]: I1213 01:32:17.264830 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-net\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.266209 kubelet[3030]: I1213 01:32:17.264874 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-config-path\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.266209 kubelet[3030]: I1213 01:32:17.264897 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-run\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.266209 kubelet[3030]: I1213 01:32:17.264922 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-hostproc\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.266209 kubelet[3030]: I1213 01:32:17.264944 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-etc-cni-netd\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.266209 kubelet[3030]: I1213 01:32:17.264980 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-kernel\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.268678 kubelet[3030]: I1213 01:32:17.265005 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-hubble-tls\") pod \"cilium-cskzn\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") " pod="kube-system/cilium-cskzn"
Dec 13 01:32:17.275121 kubelet[3030]: I1213 01:32:17.275080 3030 topology_manager.go:215] "Topology Admit Handler" podUID="cbd247fc-4760-4e1e-af0c-e505dc7338f8" podNamespace="kube-system" podName="kube-proxy-c4zd9"
Dec 13 01:32:17.365972 kubelet[3030]: I1213 01:32:17.365779 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbd247fc-4760-4e1e-af0c-e505dc7338f8-kube-proxy\") pod \"kube-proxy-c4zd9\" (UID: \"cbd247fc-4760-4e1e-af0c-e505dc7338f8\") " pod="kube-system/kube-proxy-c4zd9"
Dec 13 01:32:17.365972 kubelet[3030]: I1213 01:32:17.365858 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbd247fc-4760-4e1e-af0c-e505dc7338f8-xtables-lock\") pod \"kube-proxy-c4zd9\" (UID: \"cbd247fc-4760-4e1e-af0c-e505dc7338f8\") " pod="kube-system/kube-proxy-c4zd9"
Dec 13 01:32:17.365972 kubelet[3030]: I1213 01:32:17.365895 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbd247fc-4760-4e1e-af0c-e505dc7338f8-lib-modules\") pod \"kube-proxy-c4zd9\" (UID: \"cbd247fc-4760-4e1e-af0c-e505dc7338f8\") " pod="kube-system/kube-proxy-c4zd9"
Dec 13 01:32:17.365972 kubelet[3030]: I1213 01:32:17.365950 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26pxh\" (UniqueName: \"kubernetes.io/projected/cbd247fc-4760-4e1e-af0c-e505dc7338f8-kube-api-access-26pxh\") pod \"kube-proxy-c4zd9\" (UID: \"cbd247fc-4760-4e1e-af0c-e505dc7338f8\") " pod="kube-system/kube-proxy-c4zd9"
Dec 13 01:32:17.376616 kubelet[3030]: E1213 01:32:17.374561 3030 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 01:32:17.376616 kubelet[3030]: E1213 01:32:17.374586 3030 projected.go:200] Error preparing data for projected volume kube-api-access-4ljrh for pod kube-system/cilium-cskzn: configmap "kube-root-ca.crt" not found
Dec 13 01:32:17.376616 kubelet[3030]: E1213 01:32:17.374646 3030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-kube-api-access-4ljrh podName:1a7352c2-33a0-4d60-8314-5b2e610c838f nodeName:}" failed. No retries permitted until 2024-12-13 01:32:17.874633386 +0000 UTC m=+15.287502370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4ljrh" (UniqueName: "kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-kube-api-access-4ljrh") pod "cilium-cskzn" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f") : configmap "kube-root-ca.crt" not found
Dec 13 01:32:17.587754 containerd[1627]: time="2024-12-13T01:32:17.587592384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4zd9,Uid:cbd247fc-4760-4e1e-af0c-e505dc7338f8,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:17.640247 containerd[1627]: time="2024-12-13T01:32:17.640062933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:17.640247 containerd[1627]: time="2024-12-13T01:32:17.640159509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:17.640247 containerd[1627]: time="2024-12-13T01:32:17.640182494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:17.641090 containerd[1627]: time="2024-12-13T01:32:17.640461499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:17.696725 containerd[1627]: time="2024-12-13T01:32:17.696694077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4zd9,Uid:cbd247fc-4760-4e1e-af0c-e505dc7338f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a36e696de3504e8d4513a67b6e5518040bd11b2c7af9538c09da63a011a098fa\""
Dec 13 01:32:17.699765 containerd[1627]: time="2024-12-13T01:32:17.699743352Z" level=info msg="CreateContainer within sandbox \"a36e696de3504e8d4513a67b6e5518040bd11b2c7af9538c09da63a011a098fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:32:17.711905 containerd[1627]: time="2024-12-13T01:32:17.711883534Z" level=info msg="CreateContainer within sandbox \"a36e696de3504e8d4513a67b6e5518040bd11b2c7af9538c09da63a011a098fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4787567a749c04f67e90be970c482763851f543a42f16b45630defcbb5d4d173\""
Dec 13 01:32:17.712502 containerd[1627]: time="2024-12-13T01:32:17.712484921Z" level=info msg="StartContainer for \"4787567a749c04f67e90be970c482763851f543a42f16b45630defcbb5d4d173\""
Dec 13 01:32:17.768731 containerd[1627]: time="2024-12-13T01:32:17.768595904Z" level=info msg="StartContainer for \"4787567a749c04f67e90be970c482763851f543a42f16b45630defcbb5d4d173\" returns successfully"
Dec 13 01:32:17.814745 containerd[1627]: time="2024-12-13T01:32:17.814694075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-w9tb6,Uid:640e382c-db84-4ad9-94c4-fe1ab068573d,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:17.835008 containerd[1627]: time="2024-12-13T01:32:17.834923608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:17.835752 containerd[1627]: time="2024-12-13T01:32:17.835542729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:17.835752 containerd[1627]: time="2024-12-13T01:32:17.835634505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:17.835996 containerd[1627]: time="2024-12-13T01:32:17.835777581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:17.885760 containerd[1627]: time="2024-12-13T01:32:17.885723394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-w9tb6,Uid:640e382c-db84-4ad9-94c4-fe1ab068573d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\""
Dec 13 01:32:17.888792 containerd[1627]: time="2024-12-13T01:32:17.888326682Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 01:32:18.180395 containerd[1627]: time="2024-12-13T01:32:18.180164028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cskzn,Uid:1a7352c2-33a0-4d60-8314-5b2e610c838f,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:18.227860 containerd[1627]: time="2024-12-13T01:32:18.227775676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:18.228228 containerd[1627]: time="2024-12-13T01:32:18.228088218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:18.228228 containerd[1627]: time="2024-12-13T01:32:18.228154786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:18.228529 containerd[1627]: time="2024-12-13T01:32:18.228392914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:18.273936 containerd[1627]: time="2024-12-13T01:32:18.273878803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cskzn,Uid:1a7352c2-33a0-4d60-8314-5b2e610c838f,Namespace:kube-system,Attempt:0,} returns sandbox id \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\""
Dec 13 01:32:18.775017 kubelet[3030]: I1213 01:32:18.774169 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c4zd9" podStartSLOduration=1.774103781 podStartE2EDuration="1.774103781s" podCreationTimestamp="2024-12-13 01:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:18.773895671 +0000 UTC m=+16.186764695" watchObservedRunningTime="2024-12-13 01:32:18.774103781 +0000 UTC m=+16.186972816"
Dec 13 01:32:22.588604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043313246.mount: Deactivated successfully.
Dec 13 01:32:22.997725 containerd[1627]: time="2024-12-13T01:32:22.997676150Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:22.998722 containerd[1627]: time="2024-12-13T01:32:22.998582272Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221"
Dec 13 01:32:22.999491 containerd[1627]: time="2024-12-13T01:32:22.999460700Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:23.001319 containerd[1627]: time="2024-12-13T01:32:23.000911028Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.112513809s"
Dec 13 01:32:23.001319 containerd[1627]: time="2024-12-13T01:32:23.000940494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:32:23.003979 containerd[1627]: time="2024-12-13T01:32:23.003945861Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:32:23.007437 containerd[1627]: time="2024-12-13T01:32:23.007385511Z" level=info msg="CreateContainer within sandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:32:23.026312 containerd[1627]: time="2024-12-13T01:32:23.026260935Z" level=info msg="CreateContainer within sandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\""
Dec 13 01:32:23.028993 containerd[1627]: time="2024-12-13T01:32:23.028921157Z" level=info msg="StartContainer for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\""
Dec 13 01:32:23.082391 containerd[1627]: time="2024-12-13T01:32:23.082360936Z" level=info msg="StartContainer for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" returns successfully"
Dec 13 01:32:23.803147 kubelet[3030]: I1213 01:32:23.802903 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-w9tb6" podStartSLOduration=2.686517855 podStartE2EDuration="7.802846719s" podCreationTimestamp="2024-12-13 01:32:16 +0000 UTC" firstStartedPulling="2024-12-13 01:32:17.887401402 +0000 UTC m=+15.300270386" lastFinishedPulling="2024-12-13 01:32:23.003730237 +0000 UTC m=+20.416599250" observedRunningTime="2024-12-13 01:32:23.802517546 +0000 UTC m=+21.215386531" watchObservedRunningTime="2024-12-13 01:32:23.802846719 +0000 UTC m=+21.215715703"
Dec 13 01:32:26.747088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115144973.mount: Deactivated successfully.
Dec 13 01:32:28.261801 containerd[1627]: time="2024-12-13T01:32:28.261752515Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:28.263364 containerd[1627]: time="2024-12-13T01:32:28.263294266Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735343"
Dec 13 01:32:28.263935 containerd[1627]: time="2024-12-13T01:32:28.263903706Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:28.265505 containerd[1627]: time="2024-12-13T01:32:28.264934006Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.260951054s"
Dec 13 01:32:28.265505 containerd[1627]: time="2024-12-13T01:32:28.264963342Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 01:32:28.266813 containerd[1627]: time="2024-12-13T01:32:28.266780752Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:32:28.334796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028057785.mount: Deactivated successfully.
Dec 13 01:32:28.340119 containerd[1627]: time="2024-12-13T01:32:28.339976716Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\""
Dec 13 01:32:28.340884 containerd[1627]: time="2024-12-13T01:32:28.340850314Z" level=info msg="StartContainer for \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\""
Dec 13 01:32:28.422641 containerd[1627]: time="2024-12-13T01:32:28.422597029Z" level=info msg="StartContainer for \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\" returns successfully"
Dec 13 01:32:28.529943 containerd[1627]: time="2024-12-13T01:32:28.511871052Z" level=info msg="shim disconnected" id=198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc namespace=k8s.io
Dec 13 01:32:28.529943 containerd[1627]: time="2024-12-13T01:32:28.529863466Z" level=warning msg="cleaning up after shim disconnected" id=198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc namespace=k8s.io
Dec 13 01:32:28.529943 containerd[1627]: time="2024-12-13T01:32:28.529877171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:28.800342 containerd[1627]: time="2024-12-13T01:32:28.799977755Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:32:28.821575 containerd[1627]: time="2024-12-13T01:32:28.821517121Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\""
Dec 13 01:32:28.822358 containerd[1627]: time="2024-12-13T01:32:28.822205924Z" level=info msg="StartContainer for \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\""
Dec 13 01:32:28.878099 containerd[1627]: time="2024-12-13T01:32:28.878040284Z" level=info msg="StartContainer for \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\" returns successfully"
Dec 13 01:32:28.890639 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:32:28.890904 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:28.890969 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:28.899451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:28.916962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
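The "durationBeforeRetry 500ms" messages earlier in this window show the kubelet backing off failed volume mounts instead of hot-looping: the projected service-account volume cannot be assembled until the kube-root-ca.crt ConfigMap exists, so the operation is parked and retried later. A rough sketch of that retry shape (the 500ms initial delay is taken from the log; the doubling and the cap are assumptions about the kubelet's exponential backoff, not read from it):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a failing operation with an increasing delay,
// mirroring the "No retries permitted until ..." pattern in the log.
func mountWithBackoff(mount func() error) error {
	delay := 500 * time.Millisecond // from "durationBeforeRetry 500ms"
	const maxDelay = 2 * time.Minute // assumed cap, for illustration only
	for attempt := 1; ; attempt++ {
		if err := mount(); err == nil {
			return nil
		} else {
			fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, delay)
		}
		time.Sleep(delay)
		if delay*2 <= maxDelay {
			delay *= 2 // assumed doubling per failure
		}
	}
}

func main() {
	tries := 0
	_ = mountWithBackoff(func() error {
		tries++
		if tries < 3 {
			// Stands in for the transient condition in the log.
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	})
}
```

In the log the second attempt succeeds, because kube-controller-manager publishes kube-root-ca.crt into the namespace shortly after the control plane comes up.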
Dec 13 01:32:28.924739 containerd[1627]: time="2024-12-13T01:32:28.924666631Z" level=info msg="shim disconnected" id=79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00 namespace=k8s.io
Dec 13 01:32:28.924739 containerd[1627]: time="2024-12-13T01:32:28.924715945Z" level=warning msg="cleaning up after shim disconnected" id=79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00 namespace=k8s.io
Dec 13 01:32:28.924739 containerd[1627]: time="2024-12-13T01:32:28.924724231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:29.320260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc-rootfs.mount: Deactivated successfully.
Dec 13 01:32:29.808966 containerd[1627]: time="2024-12-13T01:32:29.808320014Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:32:29.860745 containerd[1627]: time="2024-12-13T01:32:29.860574579Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\""
Dec 13 01:32:29.862782 containerd[1627]: time="2024-12-13T01:32:29.861825581Z" level=info msg="StartContainer for \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\""
Dec 13 01:32:29.933678 containerd[1627]: time="2024-12-13T01:32:29.933628764Z" level=info msg="StartContainer for \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\" returns successfully"
Dec 13 01:32:29.960712 containerd[1627]: time="2024-12-13T01:32:29.960658498Z" level=info msg="shim disconnected" id=47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7 namespace=k8s.io
Dec 13 01:32:29.960712 containerd[1627]: time="2024-12-13T01:32:29.960705529Z" level=warning msg="cleaning up after shim disconnected" id=47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7 namespace=k8s.io
Dec 13 01:32:29.960712 containerd[1627]: time="2024-12-13T01:32:29.960715328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:30.319589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7-rootfs.mount: Deactivated successfully.
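The short-lived mount-bpf-fs container above mounts the BPF filesystem so the Cilium agent can pin eBPF maps that survive agent restarts. Functionally it reduces to a single mount(2) call, sketched here in Go (/sys/fs/bpf is Cilium's conventional mount point; the idempotence check for an already-mounted fs is elided, and this is an assumption about what the container does, not taken from the log):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf (requires root).
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
}
```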
Dec 13 01:32:30.812107 containerd[1627]: time="2024-12-13T01:32:30.811849981Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:32:30.843065 containerd[1627]: time="2024-12-13T01:32:30.841330866Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\""
Dec 13 01:32:30.848475 containerd[1627]: time="2024-12-13T01:32:30.847646663Z" level=info msg="StartContainer for \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\""
Dec 13 01:32:30.900397 containerd[1627]: time="2024-12-13T01:32:30.898089146Z" level=info msg="StartContainer for \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\" returns successfully"
Dec 13 01:32:30.919395 containerd[1627]: time="2024-12-13T01:32:30.919319920Z" level=info msg="shim disconnected" id=4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e namespace=k8s.io
Dec 13 01:32:30.919395 containerd[1627]: time="2024-12-13T01:32:30.919391878Z" level=warning msg="cleaning up after shim disconnected" id=4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e namespace=k8s.io
Dec 13 01:32:30.919650 containerd[1627]: time="2024-12-13T01:32:30.919404011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:30.931699 containerd[1627]: time="2024-12-13T01:32:30.931661888Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:32:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:32:31.318980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e-rootfs.mount: Deactivated successfully.
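mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run to completion, and their shims are torn down, before the next container starts: they are init containers of the cilium pod, while the long-running cilium-agent that follows is the regular container. A compressed sketch of that ordering using the Kubernetes Go types (container names are from the log; images and commands do not appear in it and are omitted):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ciliumPodShape mirrors the execution order visible in the log:
// init containers run one at a time, in order, then cilium-agent starts.
func ciliumPodShape() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cilium-cskzn", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup"},
				{Name: "apply-sysctl-overwrites"},
				{Name: "mount-bpf-fs"},
				{Name: "clean-cilium-state"},
			},
			Containers: []corev1.Container{
				{Name: "cilium-agent"},
			},
		},
	}
}

func main() { _ = ciliumPodShape() }
```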
Dec 13 01:32:31.820507 containerd[1627]: time="2024-12-13T01:32:31.820221865Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:32:31.849552 containerd[1627]: time="2024-12-13T01:32:31.849411185Z" level=info msg="CreateContainer within sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\""
Dec 13 01:32:31.850649 containerd[1627]: time="2024-12-13T01:32:31.850597844Z" level=info msg="StartContainer for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\""
Dec 13 01:32:31.922716 containerd[1627]: time="2024-12-13T01:32:31.922672236Z" level=info msg="StartContainer for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" returns successfully"
Dec 13 01:32:32.105983 kubelet[3030]: I1213 01:32:32.105878 3030 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:32:32.136764 kubelet[3030]: I1213 01:32:32.134669 3030 topology_manager.go:215] "Topology Admit Handler" podUID="36202efe-586d-46c6-abee-c24a81da3baa" podNamespace="kube-system" podName="coredns-76f75df574-s9fjr"
Dec 13 01:32:32.138448 kubelet[3030]: I1213 01:32:32.138255 3030 topology_manager.go:215] "Topology Admit Handler" podUID="74750cff-eecd-4a69-86d4-bc8f2208d6ad" podNamespace="kube-system" podName="coredns-76f75df574-nzfsr"
Dec 13 01:32:32.174590 kubelet[3030]: I1213 01:32:32.174547 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36202efe-586d-46c6-abee-c24a81da3baa-config-volume\") pod \"coredns-76f75df574-s9fjr\" (UID: \"36202efe-586d-46c6-abee-c24a81da3baa\") " pod="kube-system/coredns-76f75df574-s9fjr"
Dec 13 01:32:32.175303 kubelet[3030]: I1213 01:32:32.174590 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7gkv\" (UniqueName: \"kubernetes.io/projected/36202efe-586d-46c6-abee-c24a81da3baa-kube-api-access-b7gkv\") pod \"coredns-76f75df574-s9fjr\" (UID: \"36202efe-586d-46c6-abee-c24a81da3baa\") " pod="kube-system/coredns-76f75df574-s9fjr"
Dec 13 01:32:32.175303 kubelet[3030]: I1213 01:32:32.174619 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74750cff-eecd-4a69-86d4-bc8f2208d6ad-config-volume\") pod \"coredns-76f75df574-nzfsr\" (UID: \"74750cff-eecd-4a69-86d4-bc8f2208d6ad\") " pod="kube-system/coredns-76f75df574-nzfsr"
Dec 13 01:32:32.175303 kubelet[3030]: I1213 01:32:32.174636 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnw6\" (UniqueName: \"kubernetes.io/projected/74750cff-eecd-4a69-86d4-bc8f2208d6ad-kube-api-access-xwnw6\") pod \"coredns-76f75df574-nzfsr\" (UID: \"74750cff-eecd-4a69-86d4-bc8f2208d6ad\") " pod="kube-system/coredns-76f75df574-nzfsr"
Dec 13 01:32:32.322962 systemd[1]: run-containerd-runc-k8s.io-d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976-runc.g0HjFJ.mount: Deactivated successfully.
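The RunPodSandbox, then CreateContainer, then StartContainer sequence that recurs throughout this log (and repeats for the two coredns pods below) is the kubelet driving containerd over the CRI gRPC API. A bare-bones sketch of those three calls using the published CRI client types (the socket path is the conventional containerd default, and both config structs are left empty as illustration):

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	// Containerd's CRI endpoint (conventional default path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. RunPodSandbox: creates the pause sandbox (registry.k8s.io/pause:3.8 above).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* pod metadata, elided */ },
	})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer inside that sandbox.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, command, elided */ },
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer: logged as "returns successfully" when it works.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		panic(err)
	}
}
```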
Dec 13 01:32:32.452135 containerd[1627]: time="2024-12-13T01:32:32.451917565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s9fjr,Uid:36202efe-586d-46c6-abee-c24a81da3baa,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:32.452292 containerd[1627]: time="2024-12-13T01:32:32.452230135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nzfsr,Uid:74750cff-eecd-4a69-86d4-bc8f2208d6ad,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:34.093260 systemd-networkd[1249]: cilium_host: Link UP
Dec 13 01:32:34.093841 systemd-networkd[1249]: cilium_net: Link UP
Dec 13 01:32:34.095950 systemd-networkd[1249]: cilium_net: Gained carrier
Dec 13 01:32:34.098468 systemd-networkd[1249]: cilium_host: Gained carrier
Dec 13 01:32:34.171485 systemd-networkd[1249]: cilium_net: Gained IPv6LL
Dec 13 01:32:34.188620 systemd-networkd[1249]: cilium_vxlan: Link UP
Dec 13 01:32:34.188627 systemd-networkd[1249]: cilium_vxlan: Gained carrier
Dec 13 01:32:34.450539 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:32:34.980761 systemd-networkd[1249]: cilium_host: Gained IPv6LL
Dec 13 01:32:35.048852 systemd-networkd[1249]: lxc_health: Link UP
Dec 13 01:32:35.067789 systemd-networkd[1249]: lxc_health: Gained carrier
Dec 13 01:32:35.525927 systemd-networkd[1249]: lxc2ed9458e35ae: Link UP
Dec 13 01:32:35.534432 kernel: eth0: renamed from tmp31f49
Dec 13 01:32:35.541900 systemd-networkd[1249]: lxc2ed9458e35ae: Gained carrier
Dec 13 01:32:35.550851 systemd-networkd[1249]: lxca5ebe4bb97c4: Link UP
Dec 13 01:32:35.564473 kernel: eth0: renamed from tmp123d8
Dec 13 01:32:35.580142 systemd-networkd[1249]: lxca5ebe4bb97c4: Gained carrier
Dec 13 01:32:36.003635 systemd-networkd[1249]: cilium_vxlan: Gained IPv6LL
Dec 13 01:32:36.211006 kubelet[3030]: I1213 01:32:36.209796 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cskzn" podStartSLOduration=9.219834277 podStartE2EDuration="19.2097564s" podCreationTimestamp="2024-12-13 01:32:17 +0000 UTC" firstStartedPulling="2024-12-13 01:32:18.275262404 +0000 UTC m=+15.688131388" lastFinishedPulling="2024-12-13 01:32:28.265184527 +0000 UTC m=+25.678053511" observedRunningTime="2024-12-13 01:32:32.841373884 +0000 UTC m=+30.254242948" watchObservedRunningTime="2024-12-13 01:32:36.2097564 +0000 UTC m=+33.622625385"
Dec 13 01:32:36.452921 systemd-networkd[1249]: lxc_health: Gained IPv6LL
Dec 13 01:32:36.708626 systemd-networkd[1249]: lxca5ebe4bb97c4: Gained IPv6LL
Dec 13 01:32:36.708960 systemd-networkd[1249]: lxc2ed9458e35ae: Gained IPv6LL
Dec 13 01:32:38.566524 containerd[1627]: time="2024-12-13T01:32:38.564685491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:38.566524 containerd[1627]: time="2024-12-13T01:32:38.564743192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:38.566524 containerd[1627]: time="2024-12-13T01:32:38.564757459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.566524 containerd[1627]: time="2024-12-13T01:32:38.565402727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.586506 containerd[1627]: time="2024-12-13T01:32:38.582233117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:38.586506 containerd[1627]: time="2024-12-13T01:32:38.585451180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:38.586506 containerd[1627]: time="2024-12-13T01:32:38.585462172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.586506 containerd[1627]: time="2024-12-13T01:32:38.585533269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:38.692125 containerd[1627]: time="2024-12-13T01:32:38.692093638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s9fjr,Uid:36202efe-586d-46c6-abee-c24a81da3baa,Namespace:kube-system,Attempt:0,} returns sandbox id \"123d8042fa033b4fb27732016733eb28a79d90119a379e879cd01523ef10f80f\""
Dec 13 01:32:38.694925 containerd[1627]: time="2024-12-13T01:32:38.694828895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nzfsr,Uid:74750cff-eecd-4a69-86d4-bc8f2208d6ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"31f49773599b060dcbc84ceb5c20b54367546ab60a3748a9249d724d19b4aa3b\""
Dec 13 01:32:38.701194 containerd[1627]: time="2024-12-13T01:32:38.701172386Z" level=info msg="CreateContainer within sandbox \"123d8042fa033b4fb27732016733eb28a79d90119a379e879cd01523ef10f80f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:32:38.705022 containerd[1627]: time="2024-12-13T01:32:38.704927501Z" level=info msg="CreateContainer within sandbox \"31f49773599b060dcbc84ceb5c20b54367546ab60a3748a9249d724d19b4aa3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:32:38.729255 containerd[1627]: time="2024-12-13T01:32:38.729174450Z" level=info msg="CreateContainer within sandbox \"31f49773599b060dcbc84ceb5c20b54367546ab60a3748a9249d724d19b4aa3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fdfcf10ada79647292e22cc8a0db4ca3483cd5160d31c152067aff091faca74\""
Dec 13 01:32:38.731902 containerd[1627]: time="2024-12-13T01:32:38.731641331Z" level=info msg="StartContainer for \"9fdfcf10ada79647292e22cc8a0db4ca3483cd5160d31c152067aff091faca74\""
Dec 13 01:32:38.734993 containerd[1627]: time="2024-12-13T01:32:38.734520415Z" level=info msg="CreateContainer within sandbox \"123d8042fa033b4fb27732016733eb28a79d90119a379e879cd01523ef10f80f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d08c6299fc8b328674bf71c09b21bcbd4ba60fa790bd5c261ea2ffbaf4b75e8c\""
Dec 13 01:32:38.736459 containerd[1627]: time="2024-12-13T01:32:38.735657888Z" level=info msg="StartContainer for \"d08c6299fc8b328674bf71c09b21bcbd4ba60fa790bd5c261ea2ffbaf4b75e8c\""
Dec 13 01:32:38.797266 containerd[1627]: time="2024-12-13T01:32:38.796981650Z" level=info msg="StartContainer for \"d08c6299fc8b328674bf71c09b21bcbd4ba60fa790bd5c261ea2ffbaf4b75e8c\" returns successfully"
Dec 13 01:32:38.797266 containerd[1627]: time="2024-12-13T01:32:38.797039280Z" level=info msg="StartContainer for \"9fdfcf10ada79647292e22cc8a0db4ca3483cd5160d31c152067aff091faca74\" returns successfully"
Dec 13 01:32:38.874827 kubelet[3030]: I1213 01:32:38.874740 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s9fjr" podStartSLOduration=22.874699754 podStartE2EDuration="22.874699754s" podCreationTimestamp="2024-12-13 01:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:38.872697933 +0000 UTC m=+36.285566918" watchObservedRunningTime="2024-12-13 01:32:38.874699754 +0000 UTC m=+36.287568739"
Dec 13 01:32:38.874827 kubelet[3030]: I1213 01:32:38.874815 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nzfsr" podStartSLOduration=22.87480185 podStartE2EDuration="22.87480185s" podCreationTimestamp="2024-12-13 01:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:38.860545419 +0000 UTC m=+36.273414404" watchObservedRunningTime="2024-12-13 01:32:38.87480185 +0000 UTC m=+36.287670834"
Dec 13 01:32:39.584532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248638102.mount: Deactivated successfully.
Dec 13 01:33:30.550651 update_engine[1611]: I20241213 01:33:30.550561 1611 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 01:33:30.550651 update_engine[1611]: I20241213 01:33:30.550634 1611 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 01:33:30.553115 update_engine[1611]: I20241213 01:33:30.553054 1611 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 01:33:30.553837 update_engine[1611]: I20241213 01:33:30.553794 1611 omaha_request_params.cc:62] Current group set to stable
Dec 13 01:33:30.554073 update_engine[1611]: I20241213 01:33:30.553984 1611 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 01:33:30.554073 update_engine[1611]: I20241213 01:33:30.554003 1611 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 01:33:30.554073 update_engine[1611]: I20241213 01:33:30.554020 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:33:30.554073 update_engine[1611]: I20241213 01:33:30.554065 1611 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 01:33:30.554205 update_engine[1611]: I20241213 01:33:30.554134 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:33:30.554205 update_engine[1611]: I20241213 01:33:30.554144 1611 omaha_request_action.cc:272] Request:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]:
Dec 13 01:33:30.554205 update_engine[1611]: I20241213 01:33:30.554153 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:33:30.562688 update_engine[1611]: I20241213 01:33:30.562259 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:33:30.562688 update_engine[1611]: I20241213 01:33:30.562615 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:33:30.564460 update_engine[1611]: E20241213 01:33:30.564277 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:33:30.564460 update_engine[1611]: I20241213 01:33:30.564394 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 01:33:30.564715 locksmithd[1651]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 01:33:40.445003 update_engine[1611]: I20241213 01:33:40.444909 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:33:40.446159 update_engine[1611]: I20241213 01:33:40.445176 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:33:40.446159 update_engine[1611]: I20241213 01:33:40.445505 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:33:40.446362 update_engine[1611]: E20241213 01:33:40.446282 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:33:40.446362 update_engine[1611]: I20241213 01:33:40.446341 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 01:33:50.448394 update_engine[1611]: I20241213 01:33:50.448311 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:33:50.449712 update_engine[1611]: I20241213 01:33:50.448704 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:33:50.449712 update_engine[1611]: I20241213 01:33:50.448982 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:33:50.450198 update_engine[1611]: E20241213 01:33:50.449991 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:33:50.450198 update_engine[1611]: I20241213 01:33:50.450100 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:34:00.446973 update_engine[1611]: I20241213 01:34:00.446866 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:34:00.447997 update_engine[1611]: I20241213 01:34:00.447254 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:34:00.447997 update_engine[1611]: I20241213 01:34:00.447619 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:34:00.448616 update_engine[1611]: E20241213 01:34:00.448556 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:34:00.448719 update_engine[1611]: I20241213 01:34:00.448642 1611 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:34:00.448719 update_engine[1611]: I20241213 01:34:00.448660 1611 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:34:00.448806 update_engine[1611]: E20241213 01:34:00.448784 1611 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 01:34:00.448848 update_engine[1611]: I20241213 01:34:00.448819 1611 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 01:34:00.448848 update_engine[1611]: I20241213 01:34:00.448832 1611 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:34:00.448923 update_engine[1611]: I20241213 01:34:00.448843 1611 update_attempter.cc:306] Processing Done.
Dec 13 01:34:00.448923 update_engine[1611]: E20241213 01:34:00.448869 1611 update_attempter.cc:619] Update failed.
Dec 13 01:34:00.448923 update_engine[1611]: I20241213 01:34:00.448882 1611 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 01:34:00.448923 update_engine[1611]: I20241213 01:34:00.448894 1611 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 01:34:00.448923 update_engine[1611]: I20241213 01:34:00.448906 1611 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 01:34:00.449132 update_engine[1611]: I20241213 01:34:00.449013 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:34:00.449132 update_engine[1611]: I20241213 01:34:00.449048 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:34:00.449132 update_engine[1611]: I20241213 01:34:00.449060 1611 omaha_request_action.cc:272] Request:
Dec 13 01:34:00.449132 update_engine[1611]:
Dec 13 01:34:00.449132 update_engine[1611]:
Dec 13 01:34:00.449132 update_engine[1611]:
Dec 13 01:34:00.449132 update_engine[1611]:
Dec 13 01:34:00.449132 update_engine[1611]:
Dec 13 01:34:00.449132 update_engine[1611]:
Dec 13 01:34:00.449132 update_engine[1611]: I20241213 01:34:00.449074 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:34:00.449681 update_engine[1611]: I20241213 01:34:00.449345 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:34:00.449681 update_engine[1611]: I20241213 01:34:00.449648 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:34:00.450393 locksmithd[1651]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 01:34:00.450940 update_engine[1611]: E20241213 01:34:00.450531 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450603 1611 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450618 1611 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450632 1611 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450644 1611 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450655 1611 update_attempter.cc:306] Processing Done.
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450667 1611 update_attempter.cc:310] Error event sent.
Dec 13 01:34:00.450940 update_engine[1611]: I20241213 01:34:00.450685 1611 update_check_scheduler.cc:74] Next update check in 40m2s
Dec 13 01:34:00.451906 locksmithd[1651]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 01:36:51.322895 systemd[1]: Started sshd@7-78.47.218.196:22-147.75.109.163:55058.service - OpenSSH per-connection server daemon (147.75.109.163:55058).
Dec 13 01:36:52.342922 sshd[4431]: Accepted publickey for core from 147.75.109.163 port 55058 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:36:52.345296 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:52.351457 systemd-logind[1605]: New session 8 of user core.
Dec 13 01:36:52.358740 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:36:53.475283 sshd[4431]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:53.480682 systemd-logind[1605]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:36:53.481405 systemd[1]: sshd@7-78.47.218.196:22-147.75.109.163:55058.service: Deactivated successfully.
Dec 13 01:36:53.485012 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:36:53.486257 systemd-logind[1605]: Removed session 8.
Dec 13 01:36:58.640127 systemd[1]: Started sshd@8-78.47.218.196:22-147.75.109.163:59206.service - OpenSSH per-connection server daemon (147.75.109.163:59206).
Dec 13 01:36:59.633345 sshd[4447]: Accepted publickey for core from 147.75.109.163 port 59206 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:36:59.635875 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:59.643809 systemd-logind[1605]: New session 9 of user core.
Dec 13 01:36:59.647825 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:37:00.376825 sshd[4447]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:00.380947 systemd[1]: sshd@8-78.47.218.196:22-147.75.109.163:59206.service: Deactivated successfully.
Dec 13 01:37:00.387831 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:37:00.388937 systemd-logind[1605]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:37:00.390160 systemd-logind[1605]: Removed session 9.
Dec 13 01:37:05.542679 systemd[1]: Started sshd@9-78.47.218.196:22-147.75.109.163:59218.service - OpenSSH per-connection server daemon (147.75.109.163:59218).
Dec 13 01:37:06.508879 sshd[4465]: Accepted publickey for core from 147.75.109.163 port 59218 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:06.510739 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:06.516539 systemd-logind[1605]: New session 10 of user core.
Dec 13 01:37:06.521715 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:37:07.258980 sshd[4465]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:07.263304 systemd[1]: sshd@9-78.47.218.196:22-147.75.109.163:59218.service: Deactivated successfully.
Dec 13 01:37:07.267400 systemd-logind[1605]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:37:07.268113 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:37:07.269553 systemd-logind[1605]: Removed session 10.
Dec 13 01:37:07.425693 systemd[1]: Started sshd@10-78.47.218.196:22-147.75.109.163:47326.service - OpenSSH per-connection server daemon (147.75.109.163:47326).
Dec 13 01:37:08.401825 sshd[4480]: Accepted publickey for core from 147.75.109.163 port 47326 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:08.404398 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:08.412595 systemd-logind[1605]: New session 11 of user core.
Dec 13 01:37:08.419725 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:37:09.169004 sshd[4480]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:09.173962 systemd-logind[1605]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:37:09.175255 systemd[1]: sshd@10-78.47.218.196:22-147.75.109.163:47326.service: Deactivated successfully.
Dec 13 01:37:09.180678 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:37:09.182245 systemd-logind[1605]: Removed session 11.
Dec 13 01:37:09.331030 systemd[1]: Started sshd@11-78.47.218.196:22-147.75.109.163:47332.service - OpenSSH per-connection server daemon (147.75.109.163:47332).
Dec 13 01:37:10.310899 sshd[4492]: Accepted publickey for core from 147.75.109.163 port 47332 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:10.312676 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:10.317095 systemd-logind[1605]: New session 12 of user core.
Dec 13 01:37:10.322687 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:37:11.057790 sshd[4492]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:11.061979 systemd[1]: sshd@11-78.47.218.196:22-147.75.109.163:47332.service: Deactivated successfully.
Dec 13 01:37:11.067906 systemd-logind[1605]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:37:11.069051 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:37:11.071060 systemd-logind[1605]: Removed session 12.
Dec 13 01:37:16.226087 systemd[1]: Started sshd@12-78.47.218.196:22-147.75.109.163:39316.service - OpenSSH per-connection server daemon (147.75.109.163:39316).
Dec 13 01:37:17.210711 sshd[4506]: Accepted publickey for core from 147.75.109.163 port 39316 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:17.213062 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:17.219347 systemd-logind[1605]: New session 13 of user core.
Dec 13 01:37:17.227459 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:37:17.984352 sshd[4506]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:17.988640 systemd-logind[1605]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:37:17.989565 systemd[1]: sshd@12-78.47.218.196:22-147.75.109.163:39316.service: Deactivated successfully.
Dec 13 01:37:17.995750 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:37:17.997003 systemd-logind[1605]: Removed session 13.
Dec 13 01:37:18.151902 systemd[1]: Started sshd@13-78.47.218.196:22-147.75.109.163:39328.service - OpenSSH per-connection server daemon (147.75.109.163:39328).
Dec 13 01:37:19.128883 sshd[4522]: Accepted publickey for core from 147.75.109.163 port 39328 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:19.131224 sshd[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:19.138123 systemd-logind[1605]: New session 14 of user core.
Dec 13 01:37:19.147900 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:37:20.067594 sshd[4522]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:20.072138 systemd[1]: sshd@13-78.47.218.196:22-147.75.109.163:39328.service: Deactivated successfully.
Dec 13 01:37:20.076707 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:37:20.077626 systemd-logind[1605]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:37:20.078722 systemd-logind[1605]: Removed session 14.
Dec 13 01:37:20.230865 systemd[1]: Started sshd@14-78.47.218.196:22-147.75.109.163:39340.service - OpenSSH per-connection server daemon (147.75.109.163:39340).
Dec 13 01:37:21.221821 sshd[4534]: Accepted publickey for core from 147.75.109.163 port 39340 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:21.224950 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:21.233170 systemd-logind[1605]: New session 15 of user core.
Dec 13 01:37:21.239946 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:37:23.179171 sshd[4534]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:23.189352 systemd[1]: sshd@14-78.47.218.196:22-147.75.109.163:39340.service: Deactivated successfully.
Dec 13 01:37:23.196386 systemd-logind[1605]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:37:23.198077 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:37:23.199625 systemd-logind[1605]: Removed session 15.
Dec 13 01:37:23.345770 systemd[1]: Started sshd@15-78.47.218.196:22-147.75.109.163:39356.service - OpenSSH per-connection server daemon (147.75.109.163:39356).
Dec 13 01:37:24.327138 sshd[4554]: Accepted publickey for core from 147.75.109.163 port 39356 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:24.329483 sshd[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:24.335553 systemd-logind[1605]: New session 16 of user core.
Dec 13 01:37:24.341841 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:37:25.197829 sshd[4554]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:25.201743 systemd[1]: sshd@15-78.47.218.196:22-147.75.109.163:39356.service: Deactivated successfully.
Dec 13 01:37:25.207329 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:37:25.208290 systemd-logind[1605]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:37:25.210866 systemd-logind[1605]: Removed session 16.
Dec 13 01:37:25.362788 systemd[1]: Started sshd@16-78.47.218.196:22-147.75.109.163:39362.service - OpenSSH per-connection server daemon (147.75.109.163:39362).
Dec 13 01:37:25.812828 systemd[1]: Started sshd@17-78.47.218.196:22-159.223.178.117:59850.service - OpenSSH per-connection server daemon (159.223.178.117:59850).
Dec 13 01:37:25.916798 sshd[4568]: Connection closed by 159.223.178.117 port 59850
Dec 13 01:37:25.920806 systemd[1]: sshd@17-78.47.218.196:22-159.223.178.117:59850.service: Deactivated successfully.
Dec 13 01:37:26.349438 sshd[4566]: Accepted publickey for core from 147.75.109.163 port 39362 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:26.351830 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:26.358143 systemd-logind[1605]: New session 17 of user core.
Dec 13 01:37:26.363133 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:37:27.098588 sshd[4566]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:27.103596 systemd-logind[1605]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:37:27.104266 systemd[1]: sshd@16-78.47.218.196:22-147.75.109.163:39362.service: Deactivated successfully.
Dec 13 01:37:27.108687 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:37:27.109720 systemd-logind[1605]: Removed session 17.
Dec 13 01:37:32.266785 systemd[1]: Started sshd@18-78.47.218.196:22-147.75.109.163:52908.service - OpenSSH per-connection server daemon (147.75.109.163:52908).
Dec 13 01:37:33.269589 sshd[4587]: Accepted publickey for core from 147.75.109.163 port 52908 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:33.272628 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:33.281382 systemd-logind[1605]: New session 18 of user core.
Dec 13 01:37:33.285940 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:37:34.020566 sshd[4587]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:34.026225 systemd[1]: sshd@18-78.47.218.196:22-147.75.109.163:52908.service: Deactivated successfully.
Dec 13 01:37:34.030207 systemd-logind[1605]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:37:34.030516 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:37:34.032270 systemd-logind[1605]: Removed session 18.
Dec 13 01:37:39.188731 systemd[1]: Started sshd@19-78.47.218.196:22-147.75.109.163:38420.service - OpenSSH per-connection server daemon (147.75.109.163:38420).
Dec 13 01:37:40.173281 sshd[4600]: Accepted publickey for core from 147.75.109.163 port 38420 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:40.175120 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:40.181301 systemd-logind[1605]: New session 19 of user core.
Dec 13 01:37:40.187849 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:37:40.917201 sshd[4600]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:40.922301 systemd[1]: sshd@19-78.47.218.196:22-147.75.109.163:38420.service: Deactivated successfully.
Dec 13 01:37:40.927972 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:37:40.928963 systemd-logind[1605]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:37:40.930269 systemd-logind[1605]: Removed session 19.
Dec 13 01:37:41.081620 systemd[1]: Started sshd@20-78.47.218.196:22-147.75.109.163:38426.service - OpenSSH per-connection server daemon (147.75.109.163:38426).
Dec 13 01:37:42.068115 sshd[4614]: Accepted publickey for core from 147.75.109.163 port 38426 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:42.070018 sshd[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:42.075650 systemd-logind[1605]: New session 20 of user core.
Dec 13 01:37:42.081778 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:37:44.048830 containerd[1627]: time="2024-12-13T01:37:44.048757589Z" level=info msg="StopContainer for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" with timeout 30 (s)"
Dec 13 01:37:44.051186 containerd[1627]: time="2024-12-13T01:37:44.051146897Z" level=info msg="Stop container \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" with signal terminated"
Dec 13 01:37:44.055441 containerd[1627]: time="2024-12-13T01:37:44.054865208Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:37:44.061240 containerd[1627]: time="2024-12-13T01:37:44.061213483Z" level=info msg="StopContainer for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" with timeout 2 (s)"
Dec 13 01:37:44.061695 containerd[1627]: time="2024-12-13T01:37:44.061668607Z" level=info msg="Stop container \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" with signal terminated"
Dec 13 01:37:44.069703 systemd-networkd[1249]: lxc_health: Link DOWN
Dec 13 01:37:44.070181 systemd-networkd[1249]: lxc_health: Lost carrier
Dec 13 01:37:44.111478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc-rootfs.mount: Deactivated successfully.
Dec 13 01:37:44.121963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976-rootfs.mount: Deactivated successfully.
Dec 13 01:37:44.126743 containerd[1627]: time="2024-12-13T01:37:44.126647926Z" level=info msg="shim disconnected" id=08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc namespace=k8s.io
Dec 13 01:37:44.126743 containerd[1627]: time="2024-12-13T01:37:44.126707774Z" level=warning msg="cleaning up after shim disconnected" id=08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc namespace=k8s.io
Dec 13 01:37:44.126743 containerd[1627]: time="2024-12-13T01:37:44.126716001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:44.127385 containerd[1627]: time="2024-12-13T01:37:44.126842399Z" level=info msg="shim disconnected" id=d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976 namespace=k8s.io
Dec 13 01:37:44.127385 containerd[1627]: time="2024-12-13T01:37:44.126862589Z" level=warning msg="cleaning up after shim disconnected" id=d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976 namespace=k8s.io
Dec 13 01:37:44.127385 containerd[1627]: time="2024-12-13T01:37:44.126870405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:44.141110 containerd[1627]: time="2024-12-13T01:37:44.141063334Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:37:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:37:44.144326 containerd[1627]: time="2024-12-13T01:37:44.144292223Z" level=info msg="StopContainer for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" returns successfully"
Dec 13 01:37:44.144845 containerd[1627]: time="2024-12-13T01:37:44.144814871Z" level=info msg="StopContainer for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" returns successfully"
Dec 13 01:37:44.145083 containerd[1627]: time="2024-12-13T01:37:44.145053941Z" level=info msg="StopPodSandbox for \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\""
Dec 13 01:37:44.145150 containerd[1627]: time="2024-12-13T01:37:44.145083028Z" level=info msg="Container to stop \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:44.145150 containerd[1627]: time="2024-12-13T01:37:44.145092918Z" level=info msg="Container to stop \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:44.145150 containerd[1627]: time="2024-12-13T01:37:44.145101034Z" level=info msg="Container to stop \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:44.145150 containerd[1627]: time="2024-12-13T01:37:44.145108639Z" level=info msg="Container to stop \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:44.145150 containerd[1627]: time="2024-12-13T01:37:44.145116183Z" level=info msg="Container to stop \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:44.147181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac-shm.mount: Deactivated successfully.
Dec 13 01:37:44.147426 containerd[1627]: time="2024-12-13T01:37:44.147344255Z" level=info msg="StopPodSandbox for \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\""
Dec 13 01:37:44.147426 containerd[1627]: time="2024-12-13T01:37:44.147366789Z" level=info msg="Container to stop \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:44.186238 containerd[1627]: time="2024-12-13T01:37:44.186020371Z" level=info msg="shim disconnected" id=b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5 namespace=k8s.io
Dec 13 01:37:44.186238 containerd[1627]: time="2024-12-13T01:37:44.186065920Z" level=warning msg="cleaning up after shim disconnected" id=b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5 namespace=k8s.io
Dec 13 01:37:44.186238 containerd[1627]: time="2024-12-13T01:37:44.186074477Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:44.187683 containerd[1627]: time="2024-12-13T01:37:44.187568625Z" level=info msg="shim disconnected" id=50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac namespace=k8s.io
Dec 13 01:37:44.187683 containerd[1627]: time="2024-12-13T01:37:44.187607912Z" level=warning msg="cleaning up after shim disconnected" id=50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac namespace=k8s.io
Dec 13 01:37:44.187683 containerd[1627]: time="2024-12-13T01:37:44.187615437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:44.200710 containerd[1627]: time="2024-12-13T01:37:44.200680860Z" level=info msg="TearDown network for sandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" successfully"
Dec 13 01:37:44.200710 containerd[1627]: time="2024-12-13T01:37:44.200705789Z" level=info msg="StopPodSandbox for \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" returns successfully"
Dec 13 01:37:44.200872 containerd[1627]: time="2024-12-13T01:37:44.200850774Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:37:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:37:44.203213 containerd[1627]: time="2024-12-13T01:37:44.203195525Z" level=info msg="TearDown network for sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" successfully"
Dec 13 01:37:44.203280 containerd[1627]: time="2024-12-13T01:37:44.203267005Z" level=info msg="StopPodSandbox for \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" returns successfully"
Dec 13 01:37:44.220495 kubelet[3030]: I1213 01:37:44.220131 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-cgroup\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220495 kubelet[3030]: I1213 01:37:44.220171 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-config-path\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220495 kubelet[3030]: I1213 01:37:44.220188 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-hostproc\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220495 kubelet[3030]: I1213 01:37:44.220205 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-lib-modules\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220495 kubelet[3030]: I1213 01:37:44.220218 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cni-path\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220495 kubelet[3030]: I1213 01:37:44.220235 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7352c2-33a0-4d60-8314-5b2e610c838f-clustermesh-secrets\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220966 kubelet[3030]: I1213 01:37:44.220252 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ljrh\" (UniqueName: \"kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-kube-api-access-4ljrh\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220966 kubelet[3030]: I1213 01:37:44.220268 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-xtables-lock\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220966 kubelet[3030]: I1213 01:37:44.220288 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shwwx\" (UniqueName: \"kubernetes.io/projected/640e382c-db84-4ad9-94c4-fe1ab068573d-kube-api-access-shwwx\") pod \"640e382c-db84-4ad9-94c4-fe1ab068573d\" (UID: \"640e382c-db84-4ad9-94c4-fe1ab068573d\") "
Dec 13 01:37:44.220966 kubelet[3030]: I1213 01:37:44.220301 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-bpf-maps\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220966 kubelet[3030]: I1213 01:37:44.220315 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-net\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.220966 kubelet[3030]: I1213 01:37:44.220329 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-etc-cni-netd\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.221090 kubelet[3030]: I1213 01:37:44.220345 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/640e382c-db84-4ad9-94c4-fe1ab068573d-cilium-config-path\") pod \"640e382c-db84-4ad9-94c4-fe1ab068573d\" (UID: \"640e382c-db84-4ad9-94c4-fe1ab068573d\") "
Dec 13 01:37:44.221090 kubelet[3030]: I1213 01:37:44.220361 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-hubble-tls\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.221090 kubelet[3030]: I1213 01:37:44.220376 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-run\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.221090 kubelet[3030]: I1213 01:37:44.220390 3030 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-kernel\") pod \"1a7352c2-33a0-4d60-8314-5b2e610c838f\" (UID: \"1a7352c2-33a0-4d60-8314-5b2e610c838f\") "
Dec 13 01:37:44.221973 kubelet[3030]: I1213 01:37:44.221094 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.222020 kubelet[3030]: I1213 01:37:44.221980 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.222183 kubelet[3030]: I1213 01:37:44.222157 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.222183 kubelet[3030]: I1213 01:37:44.222179 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.222245 kubelet[3030]: I1213 01:37:44.222192 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.222889 kubelet[3030]: I1213 01:37:44.220995 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.228552 kubelet[3030]: I1213 01:37:44.228455 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.228552 kubelet[3030]: I1213 01:37:44.228481 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.228552 kubelet[3030]: I1213 01:37:44.228496 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.230250 kubelet[3030]: I1213 01:37:44.230234 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:44.231819 kubelet[3030]: I1213 01:37:44.231800 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/640e382c-db84-4ad9-94c4-fe1ab068573d-kube-api-access-shwwx" (OuterVolumeSpecName: "kube-api-access-shwwx") pod "640e382c-db84-4ad9-94c4-fe1ab068573d" (UID: "640e382c-db84-4ad9-94c4-fe1ab068573d"). InnerVolumeSpecName "kube-api-access-shwwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:37:44.236233 kubelet[3030]: I1213 01:37:44.236215 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-kube-api-access-4ljrh" (OuterVolumeSpecName: "kube-api-access-4ljrh") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "kube-api-access-4ljrh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:37:44.236320 kubelet[3030]: I1213 01:37:44.236270 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:37:44.236694 kubelet[3030]: I1213 01:37:44.236673 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7352c2-33a0-4d60-8314-5b2e610c838f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:37:44.236864 kubelet[3030]: I1213 01:37:44.236823 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a7352c2-33a0-4d60-8314-5b2e610c838f" (UID: "1a7352c2-33a0-4d60-8314-5b2e610c838f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:37:44.237058 kubelet[3030]: I1213 01:37:44.237034 3030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/640e382c-db84-4ad9-94c4-fe1ab068573d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "640e382c-db84-4ad9-94c4-fe1ab068573d" (UID: "640e382c-db84-4ad9-94c4-fe1ab068573d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320799 3030 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-shwwx\" (UniqueName: \"kubernetes.io/projected/640e382c-db84-4ad9-94c4-fe1ab068573d-kube-api-access-shwwx\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320828 3030 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-bpf-maps\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320842 3030 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-net\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320854 3030 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-etc-cni-netd\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320866 3030 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/640e382c-db84-4ad9-94c4-fe1ab068573d-cilium-config-path\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320877 3030 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-hubble-tls\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320888 3030 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-run\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.320911 kubelet[3030]: I1213 01:37:44.320901 3030 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-host-proc-sys-kernel\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320912 3030 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-cgroup\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320923 3030 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7352c2-33a0-4d60-8314-5b2e610c838f-cilium-config-path\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320936 3030 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-hostproc\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320947 3030 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-lib-modules\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320956 3030 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-cni-path\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320966 3030 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7352c2-33a0-4d60-8314-5b2e610c838f-clustermesh-secrets\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320977 3030 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4ljrh\" (UniqueName: \"kubernetes.io/projected/1a7352c2-33a0-4d60-8314-5b2e610c838f-kube-api-access-4ljrh\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.321243 kubelet[3030]: I1213 01:37:44.320990 3030 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7352c2-33a0-4d60-8314-5b2e610c838f-xtables-lock\") on node \"ci-4081-2-1-e-82283d2dd3\" DevicePath \"\""
Dec 13 01:37:44.480571 kubelet[3030]: I1213 01:37:44.479770 3030 scope.go:117] "RemoveContainer" containerID="d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976"
Dec 13 01:37:44.516366 containerd[1627]: time="2024-12-13T01:37:44.516044019Z" level=info msg="RemoveContainer for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\""
Dec 13 01:37:44.521811 containerd[1627]: time="2024-12-13T01:37:44.521785139Z" level=info msg="RemoveContainer for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" returns successfully"
Dec 13 01:37:44.522079 kubelet[3030]: I1213 01:37:44.522045 3030 scope.go:117] "RemoveContainer" containerID="4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e"
Dec 13 01:37:44.522834 containerd[1627]: time="2024-12-13T01:37:44.522796838Z" level=info msg="RemoveContainer for \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\""
Dec 13 01:37:44.525594 containerd[1627]: time="2024-12-13T01:37:44.525570072Z" level=info msg="RemoveContainer for \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\" returns successfully"
Dec 13 01:37:44.525790 kubelet[3030]: I1213 01:37:44.525722 3030 scope.go:117] "RemoveContainer" containerID="47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7"
Dec 13 01:37:44.526925 containerd[1627]: time="2024-12-13T01:37:44.526850580Z" level=info msg="RemoveContainer for \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\""
Dec 13 01:37:44.529588 containerd[1627]: time="2024-12-13T01:37:44.529562703Z" level=info msg="RemoveContainer for \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\" returns successfully"
Dec 13 01:37:44.529681 kubelet[3030]: I1213 01:37:44.529660 3030 scope.go:117] "RemoveContainer" containerID="79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00"
Dec 13 01:37:44.530566 containerd[1627]: time="2024-12-13T01:37:44.530532079Z" level=info msg="RemoveContainer for \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\""
Dec 13 01:37:44.532870 containerd[1627]: time="2024-12-13T01:37:44.532846600Z" level=info msg="RemoveContainer for \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\" returns successfully"
Dec 13 01:37:44.533015 kubelet[3030]: I1213 01:37:44.533002 3030 scope.go:117] "RemoveContainer" containerID="198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc"
Dec 13 01:37:44.533979 containerd[1627]: time="2024-12-13T01:37:44.533945080Z" level=info msg="RemoveContainer for \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\""
Dec 13 01:37:44.538592 containerd[1627]: time="2024-12-13T01:37:44.538559215Z" level=info msg="RemoveContainer for \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\" returns successfully"
Dec 13 01:37:44.538749 kubelet[3030]: I1213 01:37:44.538691 3030 scope.go:117] "RemoveContainer" containerID="d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976"
Dec 13 01:37:44.543849 containerd[1627]: time="2024-12-13T01:37:44.538819165Z" level=error msg="ContainerStatus for \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\": not found"
Dec 13 01:37:44.550193 kubelet[3030]: E1213 01:37:44.550168 3030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\": not found" containerID="d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976"
Dec 13 01:37:44.555899 kubelet[3030]: I1213 01:37:44.555874 3030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976"} err="failed to get container status \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\": rpc error: code = NotFound desc = an error occurred when try to find container \"d61138f95e49d1922e2cbc6d8eb5db1a0ce38dc787927a43e28d112df5b57976\": not found"
Dec 13 01:37:44.555899 kubelet[3030]: I1213 01:37:44.555895 3030 scope.go:117] "RemoveContainer" containerID="4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e"
Dec 13 01:37:44.556081 containerd[1627]: time="2024-12-13T01:37:44.556043486Z" level=error msg="ContainerStatus for \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\": not found"
Dec 13 01:37:44.556379 kubelet[3030]: E1213 01:37:44.556140 3030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\": not found" containerID="4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e"
Dec 13 01:37:44.556379 kubelet[3030]: I1213 01:37:44.556161 3030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e"} err="failed to get container status \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4491fa88607f3e526d0001aeaae82e4a6080fe5bc0796d4bcd92e3226f49e53e\": not found"
Dec 13 01:37:44.556379 kubelet[3030]: I1213 01:37:44.556171 3030 scope.go:117] "RemoveContainer" containerID="47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7"
Dec 13 01:37:44.556580 containerd[1627]: time="2024-12-13T01:37:44.556471168Z" level=error msg="ContainerStatus for \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\"
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\": not found" Dec 13 01:37:44.556680 kubelet[3030]: E1213 01:37:44.556661 3030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\": not found" containerID="47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7" Dec 13 01:37:44.556729 kubelet[3030]: I1213 01:37:44.556707 3030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7"} err="failed to get container status \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"47ad1d8518a787c3c77afe326b2db039fa3d42e564793afe0cd838f3b7a449a7\": not found" Dec 13 01:37:44.556729 kubelet[3030]: I1213 01:37:44.556715 3030 scope.go:117] "RemoveContainer" containerID="79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00" Dec 13 01:37:44.556958 containerd[1627]: time="2024-12-13T01:37:44.556931453Z" level=error msg="ContainerStatus for \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\": not found" Dec 13 01:37:44.557039 kubelet[3030]: E1213 01:37:44.557026 3030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\": not found" containerID="79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00" Dec 13 01:37:44.557070 kubelet[3030]: I1213 01:37:44.557046 3030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00"} err="failed to get container status \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\": rpc error: code = NotFound desc = an error occurred when try to find container \"79fdd3b46073a3ffb3dc9bca5d7ab5f8ce30238180e79c61ea86ab478002ea00\": not found" Dec 13 01:37:44.557070 kubelet[3030]: I1213 01:37:44.557053 3030 scope.go:117] "RemoveContainer" containerID="198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc" Dec 13 01:37:44.557247 containerd[1627]: time="2024-12-13T01:37:44.557219729Z" level=error msg="ContainerStatus for \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\": not found" Dec 13 01:37:44.557446 kubelet[3030]: E1213 01:37:44.557368 3030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\": not found" containerID="198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc" Dec 13 01:37:44.557446 kubelet[3030]: I1213 01:37:44.557398 3030 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc"} err="failed to get container status \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"198a5a1b43c21205c54621e3cb185498e9c3f519f22862fed9e92052e3df73cc\": not found" Dec 13 01:37:44.557446 kubelet[3030]: I1213 01:37:44.557406 3030 scope.go:117] "RemoveContainer" containerID="08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc" Dec 13 01:37:44.558904 containerd[1627]: time="2024-12-13T01:37:44.558813995Z" level=info msg="RemoveContainer for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\"" Dec 13 01:37:44.564885 containerd[1627]: time="2024-12-13T01:37:44.564856417Z" level=info msg="RemoveContainer for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" returns successfully" Dec 13 01:37:44.564993 kubelet[3030]: I1213 01:37:44.564965 3030 scope.go:117] "RemoveContainer" containerID="08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc" Dec 13 01:37:44.565152 containerd[1627]: time="2024-12-13T01:37:44.565125567Z" level=error msg="ContainerStatus for \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\": not found" Dec 13 01:37:44.565248 kubelet[3030]: E1213 01:37:44.565229 3030 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\": not found" containerID="08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc" Dec 13 01:37:44.565294 kubelet[3030]: I1213 01:37:44.565253 3030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc"} err="failed to get container status \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"08fe42a46fabde6dbedf7395200c4cc86e7aed2e235d4835b5789276403530fc\": not found" Dec 13 01:37:44.695960 kubelet[3030]: I1213 01:37:44.695928 3030 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" path="/var/lib/kubelet/pods/1a7352c2-33a0-4d60-8314-5b2e610c838f/volumes" Dec 13 01:37:44.697145 kubelet[3030]: I1213 01:37:44.697122 3030 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="640e382c-db84-4ad9-94c4-fe1ab068573d" path="/var/lib/kubelet/pods/640e382c-db84-4ad9-94c4-fe1ab068573d/volumes" Dec 13 01:37:45.032073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac-rootfs.mount: Deactivated successfully. Dec 13 01:37:45.032314 systemd[1]: var-lib-kubelet-pods-1a7352c2\x2d33a0\x2d4d60\x2d8314\x2d5b2e610c838f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ljrh.mount: Deactivated successfully. Dec 13 01:37:45.032881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5-rootfs.mount: Deactivated successfully. 
Dec 13 01:37:45.033135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5-shm.mount: Deactivated successfully.
Dec 13 01:37:45.033393 systemd[1]: var-lib-kubelet-pods-640e382c\x2ddb84\x2d4ad9\x2d94c4\x2dfe1ab068573d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshwwx.mount: Deactivated successfully.
Dec 13 01:37:45.033572 systemd[1]: var-lib-kubelet-pods-1a7352c2\x2d33a0\x2d4d60\x2d8314\x2d5b2e610c838f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:37:45.033731 systemd[1]: var-lib-kubelet-pods-1a7352c2\x2d33a0\x2d4d60\x2d8314\x2d5b2e610c838f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:37:46.117842 sshd[4614]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:46.123736 systemd-logind[1605]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:37:46.124249 systemd[1]: sshd@20-78.47.218.196:22-147.75.109.163:38426.service: Deactivated successfully.
Dec 13 01:37:46.129751 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:37:46.131037 systemd-logind[1605]: Removed session 20.
Dec 13 01:37:46.288787 systemd[1]: Started sshd@21-78.47.218.196:22-147.75.109.163:55302.service - OpenSSH per-connection server daemon (147.75.109.163:55302).
Dec 13 01:37:47.268295 sshd[4778]: Accepted publickey for core from 147.75.109.163 port 55302 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:47.270076 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:47.275023 systemd-logind[1605]: New session 21 of user core.
Dec 13 01:37:47.278721 systemd[1]: Started session-21.scope - Session 21 of User core.
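The .mount unit names above are systemd path escapes of the kubelet volume directories: leading slashes are dropped, each interior "/" becomes "-", and any byte outside [A-Za-z0-9:_.] is hex-escaped as \xNN, which is why every literal "-" in a pod UID appears as \x2d and the "~" in kubernetes.io~projected as \x7e. Below is a best-effort Go reimplementation of that rule for illustration; `systemd-escape --path` is the authoritative tool, so treat this as a sketch of the scheme rather than a reference implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath mimics systemd's path escaping as used for .mount unit
// names: trim slashes, map '/' to '-', and hex-escape every other
// byte outside [A-Za-z0-9:_.] as \xNN (a leading '.' is escaped too).
// Best-effort reimplementation, not a substitute for systemd-escape.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_',
			c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/var/lib/kubelet/pods/1a7352c2-33a0-4d60-8314-5b2e610c838f/volumes/kubernetes.io~projected/hubble-tls"))
	// var-lib-kubelet-pods-1a7352c2\x2d33a0\x2d4d60\x2d8314\x2d5b2e610c838f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls
}
```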
Dec 13 01:37:47.858595 kubelet[3030]: E1213 01:37:47.858561 3030 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:37:48.115939 kubelet[3030]: I1213 01:37:48.115629 3030 topology_manager.go:215] "Topology Admit Handler" podUID="0d9eb341-66f3-4110-bea8-4a62e6b13833" podNamespace="kube-system" podName="cilium-hqvhl"
Dec 13 01:37:48.121890 kubelet[3030]: E1213 01:37:48.121727 3030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" containerName="mount-cgroup"
Dec 13 01:37:48.121890 kubelet[3030]: E1213 01:37:48.121773 3030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" containerName="mount-bpf-fs"
Dec 13 01:37:48.121890 kubelet[3030]: E1213 01:37:48.121783 3030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="640e382c-db84-4ad9-94c4-fe1ab068573d" containerName="cilium-operator"
Dec 13 01:37:48.121890 kubelet[3030]: E1213 01:37:48.121790 3030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" containerName="apply-sysctl-overwrites"
Dec 13 01:37:48.121890 kubelet[3030]: E1213 01:37:48.121796 3030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" containerName="clean-cilium-state"
Dec 13 01:37:48.121890 kubelet[3030]: E1213 01:37:48.121801 3030 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" containerName="cilium-agent"
Dec 13 01:37:48.124472 kubelet[3030]: I1213 01:37:48.124449 3030 memory_manager.go:354] "RemoveStaleState removing state" podUID="640e382c-db84-4ad9-94c4-fe1ab068573d" containerName="cilium-operator"
Dec 13 01:37:48.124648 kubelet[3030]: I1213 01:37:48.124588 3030 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7352c2-33a0-4d60-8314-5b2e610c838f" containerName="cilium-agent"
Dec 13 01:37:48.141669 kubelet[3030]: I1213 01:37:48.141640 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-bpf-maps\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141750 kubelet[3030]: I1213 01:37:48.141678 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-cni-path\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141750 kubelet[3030]: I1213 01:37:48.141701 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d9eb341-66f3-4110-bea8-4a62e6b13833-cilium-config-path\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141750 kubelet[3030]: I1213 01:37:48.141717 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-hostproc\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141750 kubelet[3030]: I1213 01:37:48.141733 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d9eb341-66f3-4110-bea8-4a62e6b13833-hubble-tls\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141750 kubelet[3030]: I1213 01:37:48.141749 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-xtables-lock\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141853 kubelet[3030]: I1213 01:37:48.141765 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d9eb341-66f3-4110-bea8-4a62e6b13833-cilium-ipsec-secrets\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141853 kubelet[3030]: I1213 01:37:48.141782 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-host-proc-sys-kernel\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141853 kubelet[3030]: I1213 01:37:48.141798 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-cilium-cgroup\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141853 kubelet[3030]: I1213 01:37:48.141812 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-lib-modules\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141853 kubelet[3030]: I1213 01:37:48.141828 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d9eb341-66f3-4110-bea8-4a62e6b13833-clustermesh-secrets\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141958 kubelet[3030]: I1213 01:37:48.141844 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-etc-cni-netd\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141958 kubelet[3030]: I1213 01:37:48.141860 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-cilium-run\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141958 kubelet[3030]: I1213 01:37:48.141876 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkhbg\" (UniqueName: \"kubernetes.io/projected/0d9eb341-66f3-4110-bea8-4a62e6b13833-kube-api-access-gkhbg\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.141958 kubelet[3030]: I1213 01:37:48.141894 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d9eb341-66f3-4110-bea8-4a62e6b13833-host-proc-sys-net\") pod \"cilium-hqvhl\" (UID: \"0d9eb341-66f3-4110-bea8-4a62e6b13833\") " pod="kube-system/cilium-hqvhl"
Dec 13 01:37:48.343681 sshd[4778]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:48.348380 systemd[1]: sshd@21-78.47.218.196:22-147.75.109.163:55302.service: Deactivated successfully.
Dec 13 01:37:48.352700 systemd-logind[1605]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:37:48.353019 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:37:48.355642 systemd-logind[1605]: Removed session 21.
Dec 13 01:37:48.433835 containerd[1627]: time="2024-12-13T01:37:48.433751120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqvhl,Uid:0d9eb341-66f3-4110-bea8-4a62e6b13833,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:48.462789 containerd[1627]: time="2024-12-13T01:37:48.462547743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:48.462789 containerd[1627]: time="2024-12-13T01:37:48.462597380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:48.462789 containerd[1627]: time="2024-12-13T01:37:48.462614194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:48.462789 containerd[1627]: time="2024-12-13T01:37:48.462723368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:48.499380 containerd[1627]: time="2024-12-13T01:37:48.499348163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqvhl,Uid:0d9eb341-66f3-4110-bea8-4a62e6b13833,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\""
Dec 13 01:37:48.507240 containerd[1627]: time="2024-12-13T01:37:48.507149345Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:37:48.508205 systemd[1]: Started sshd@22-78.47.218.196:22-147.75.109.163:55314.service - OpenSSH per-connection server daemon (147.75.109.163:55314).
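The cpu_manager / memory_manager entries above show what admission of cilium-hqvhl triggers first: both managers diff their checkpointed per-container state against the set of currently active pods and drop entries belonging to the two pods deleted earlier. A toy model of that sweep follows; the data shapes (the `key` struct, the assignment strings) are invented for illustration, and only the reconcile behavior is taken from the log.

```go
package main

import "fmt"

// key identifies a checkpointed container assignment, mirroring the
// (podUID, containerName) pairs printed by RemoveStaleState above.
type key struct{ podUID, container string }

// staleStateSweep drops every checkpointed assignment whose pod is no
// longer active -- the RemoveStaleState idea, in miniature.
func staleStateSweep(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"1a7352c2-33a0-4d60-8314-5b2e610c838f", "cilium-agent"}:    "cpuset=0-1", // deleted pod
		{"640e382c-db84-4ad9-94c4-fe1ab068573d", "cilium-operator"}: "cpuset=2",   // deleted pod
		{"0d9eb341-66f3-4110-bea8-4a62e6b13833", "cilium-agent"}:    "cpuset=3",   // the new cilium-hqvhl pod
	}
	active := map[string]bool{"0d9eb341-66f3-4110-bea8-4a62e6b13833": true}
	staleStateSweep(assignments, active)
	fmt.Println("remaining assignments:", len(assignments)) // 1
}
```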
Dec 13 01:37:48.525554 containerd[1627]: time="2024-12-13T01:37:48.525504858Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f5a8ae75606b19cb9d556bebb6b910eac9cb5083f7ce8f0e0b0fc46165cdb7e\""
Dec 13 01:37:48.526824 containerd[1627]: time="2024-12-13T01:37:48.526248819Z" level=info msg="StartContainer for \"9f5a8ae75606b19cb9d556bebb6b910eac9cb5083f7ce8f0e0b0fc46165cdb7e\""
Dec 13 01:37:48.575525 containerd[1627]: time="2024-12-13T01:37:48.575349724Z" level=info msg="StartContainer for \"9f5a8ae75606b19cb9d556bebb6b910eac9cb5083f7ce8f0e0b0fc46165cdb7e\" returns successfully"
Dec 13 01:37:48.607658 containerd[1627]: time="2024-12-13T01:37:48.607617066Z" level=info msg="shim disconnected" id=9f5a8ae75606b19cb9d556bebb6b910eac9cb5083f7ce8f0e0b0fc46165cdb7e namespace=k8s.io
Dec 13 01:37:48.607909 containerd[1627]: time="2024-12-13T01:37:48.607866136Z" level=warning msg="cleaning up after shim disconnected" id=9f5a8ae75606b19cb9d556bebb6b910eac9cb5083f7ce8f0e0b0fc46165cdb7e namespace=k8s.io
Dec 13 01:37:48.607909 containerd[1627]: time="2024-12-13T01:37:48.607886185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:49.480450 sshd[4838]: Accepted publickey for core from 147.75.109.163 port 55314 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:49.482737 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:49.490697 systemd-logind[1605]: New session 22 of user core.
Dec 13 01:37:49.495452 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:37:49.522808 containerd[1627]: time="2024-12-13T01:37:49.520339945Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:37:49.535406 containerd[1627]: time="2024-12-13T01:37:49.535284570Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c43b0034284458fea1b3b2dbd286b11c8c0b8401d818c7e39b93ee2f1ee0255\""
Dec 13 01:37:49.536395 containerd[1627]: time="2024-12-13T01:37:49.536003933Z" level=info msg="StartContainer for \"4c43b0034284458fea1b3b2dbd286b11c8c0b8401d818c7e39b93ee2f1ee0255\""
Dec 13 01:37:49.588466 containerd[1627]: time="2024-12-13T01:37:49.588359999Z" level=info msg="StartContainer for \"4c43b0034284458fea1b3b2dbd286b11c8c0b8401d818c7e39b93ee2f1ee0255\" returns successfully"
Dec 13 01:37:49.614067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c43b0034284458fea1b3b2dbd286b11c8c0b8401d818c7e39b93ee2f1ee0255-rootfs.mount: Deactivated successfully.
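The sandbox and container bring-up above follows the usual CRI ordering: RunPodSandbox returns a sandbox ID, CreateContainer is issued against that ID, then StartContainer; the "shim disconnected" trio afterwards is the normal exit path of a short-lived init container such as mount-cgroup. A schematic of that ordering follows; the `cri` interface here is a deliberately simplified stand-in, not the real CRI signatures, which carry full pod and container configs.

```go
package main

import "fmt"

// cri is a simplified stand-in for the runtime surface exercised in
// the log above.
type cri interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// bringUp mirrors the ordering above: sandbox first, then create and
// start a container addressed to the returned sandbox ID.
func bringUp(rt cri, pod, container string) (string, error) {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return "", fmt.Errorf("RunPodSandbox: %w", err)
	}
	cid, err := rt.CreateContainer(sb, container)
	if err != nil {
		return "", fmt.Errorf("CreateContainer: %w", err)
	}
	return cid, rt.StartContainer(cid)
}

// fake is a toy runtime so the sketch runs standalone.
type fake int

func (f *fake) RunPodSandbox(pod string) (string, error) {
	*f++
	return fmt.Sprintf("sb-%d", int(*f)), nil
}
func (f *fake) CreateContainer(sb, name string) (string, error) { return sb + "/" + name, nil }
func (f *fake) StartContainer(id string) error                  { fmt.Println("started", id); return nil }

func main() {
	var rt fake
	if _, err := bringUp(&rt, "cilium-hqvhl", "mount-cgroup"); err != nil {
		panic(err)
	}
}
```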
Dec 13 01:37:49.618818 containerd[1627]: time="2024-12-13T01:37:49.618760804Z" level=info msg="shim disconnected" id=4c43b0034284458fea1b3b2dbd286b11c8c0b8401d818c7e39b93ee2f1ee0255 namespace=k8s.io
Dec 13 01:37:49.618818 containerd[1627]: time="2024-12-13T01:37:49.618816403Z" level=warning msg="cleaning up after shim disconnected" id=4c43b0034284458fea1b3b2dbd286b11c8c0b8401d818c7e39b93ee2f1ee0255 namespace=k8s.io
Dec 13 01:37:49.619039 containerd[1627]: time="2024-12-13T01:37:49.618827274Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:50.160711 sshd[4838]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:50.165810 systemd[1]: sshd@22-78.47.218.196:22-147.75.109.163:55314.service: Deactivated successfully.
Dec 13 01:37:50.174254 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:37:50.175896 systemd-logind[1605]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:37:50.177657 systemd-logind[1605]: Removed session 22.
Dec 13 01:37:50.301482 kubelet[3030]: I1213 01:37:50.297734 3030 setters.go:568] "Node became not ready" node="ci-4081-2-1-e-82283d2dd3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:37:50Z","lastTransitionTime":"2024-12-13T01:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:37:50.328795 systemd[1]: Started sshd@23-78.47.218.196:22-147.75.109.163:55330.service - OpenSSH per-connection server daemon (147.75.109.163:55330).
Dec 13 01:37:50.527897 containerd[1627]: time="2024-12-13T01:37:50.527337579Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:37:50.560222 containerd[1627]: time="2024-12-13T01:37:50.560032762Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc011d6a78fe9a40f7c5763b4cbf13bb05457e46b217ed22a90a6632c2a579c6\""
Dec 13 01:37:50.562407 containerd[1627]: time="2024-12-13T01:37:50.561916382Z" level=info msg="StartContainer for \"bc011d6a78fe9a40f7c5763b4cbf13bb05457e46b217ed22a90a6632c2a579c6\""
Dec 13 01:37:50.624565 containerd[1627]: time="2024-12-13T01:37:50.624476400Z" level=info msg="StartContainer for \"bc011d6a78fe9a40f7c5763b4cbf13bb05457e46b217ed22a90a6632c2a579c6\" returns successfully"
Dec 13 01:37:50.649257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc011d6a78fe9a40f7c5763b4cbf13bb05457e46b217ed22a90a6632c2a579c6-rootfs.mount: Deactivated successfully.
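The "Node became not ready" entry above carries the exact condition object the kubelet patches onto the Node: with the CNI plugin not yet initialized, Ready flips to False with reason KubeletNotReady. The payload is plain JSON, so a small struct covering the printed fields decodes it directly; the struct below is a subset of the real v1.NodeCondition, trimmed to what the log shows.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition holds the fields visible in the kubelet entry above
// (a subset of the Kubernetes v1.NodeCondition type).
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the condition={...} payload in the log.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:37:50Z","lastTransitionTime":"2024-12-13T01:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
}
```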
Dec 13 01:37:50.655178 containerd[1627]: time="2024-12-13T01:37:50.655128178Z" level=info msg="shim disconnected" id=bc011d6a78fe9a40f7c5763b4cbf13bb05457e46b217ed22a90a6632c2a579c6 namespace=k8s.io
Dec 13 01:37:50.655178 containerd[1627]: time="2024-12-13T01:37:50.655171664Z" level=warning msg="cleaning up after shim disconnected" id=bc011d6a78fe9a40f7c5763b4cbf13bb05457e46b217ed22a90a6632c2a579c6 namespace=k8s.io
Dec 13 01:37:50.655178 containerd[1627]: time="2024-12-13T01:37:50.655179740Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:51.319383 sshd[4969]: Accepted publickey for core from 147.75.109.163 port 55330 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:51.321069 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:51.326318 systemd-logind[1605]: New session 23 of user core.
Dec 13 01:37:51.334068 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:37:51.528676 containerd[1627]: time="2024-12-13T01:37:51.528632817Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:37:51.541353 containerd[1627]: time="2024-12-13T01:37:51.541268557Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b97fa4ff98848001aa21eae71f252fef9b94b51d9d959c5352e6894ee83054ba\""
Dec 13 01:37:51.542130 containerd[1627]: time="2024-12-13T01:37:51.542106745Z" level=info msg="StartContainer for \"b97fa4ff98848001aa21eae71f252fef9b94b51d9d959c5352e6894ee83054ba\""
Dec 13 01:37:51.596572 containerd[1627]: time="2024-12-13T01:37:51.595660681Z" level=info msg="StartContainer for \"b97fa4ff98848001aa21eae71f252fef9b94b51d9d959c5352e6894ee83054ba\" returns successfully"
Dec 13 01:37:51.614024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b97fa4ff98848001aa21eae71f252fef9b94b51d9d959c5352e6894ee83054ba-rootfs.mount: Deactivated successfully.
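Taken together, the entries from 01:37:48 to 01:37:52 show the cilium pod's init containers running strictly in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each created only after the previous one's shim has exited and been cleaned up, with cilium-agent started last. A sketch of that gating follows; the step functions are placeholders, not the real containers' entrypoints.

```go
package main

import "fmt"

// runInit models the sequencing visible above: init steps run one at
// a time, each must succeed before the next is created, and the main
// container only launches after all of them.
func runInit(steps []string, run func(string) error) error {
	for _, s := range steps {
		if err := run(s); err != nil {
			return fmt.Errorf("init container %q failed, pod stays pending: %w", s, err)
		}
	}
	return nil
}

func main() {
	initContainers := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	run := func(name string) error { fmt.Println("ran", name); return nil }
	if err := runInit(initContainers, run); err != nil {
		panic(err)
	}
	fmt.Println("starting cilium-agent")
}
```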
Dec 13 01:37:51.616256 containerd[1627]: time="2024-12-13T01:37:51.616186771Z" level=info msg="shim disconnected" id=b97fa4ff98848001aa21eae71f252fef9b94b51d9d959c5352e6894ee83054ba namespace=k8s.io
Dec 13 01:37:51.616256 containerd[1627]: time="2024-12-13T01:37:51.616236870Z" level=warning msg="cleaning up after shim disconnected" id=b97fa4ff98848001aa21eae71f252fef9b94b51d9d959c5352e6894ee83054ba namespace=k8s.io
Dec 13 01:37:51.616256 containerd[1627]: time="2024-12-13T01:37:51.616245096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:52.534782 containerd[1627]: time="2024-12-13T01:37:52.534599313Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:37:52.562705 containerd[1627]: time="2024-12-13T01:37:52.562659304Z" level=info msg="CreateContainer within sandbox \"0ee269c7224c23f075695fc902031c284104e0c5df73eea87a3c40a04526883e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96099518d92ffe293a609d8023716fe89bd1fc6baca378c2e1aa1861a0198baf\""
Dec 13 01:37:52.563496 containerd[1627]: time="2024-12-13T01:37:52.563335271Z" level=info msg="StartContainer for \"96099518d92ffe293a609d8023716fe89bd1fc6baca378c2e1aa1861a0198baf\""
Dec 13 01:37:52.593041 systemd[1]: run-containerd-runc-k8s.io-96099518d92ffe293a609d8023716fe89bd1fc6baca378c2e1aa1861a0198baf-runc.2jKw8Q.mount: Deactivated successfully.
Dec 13 01:37:52.621827 containerd[1627]: time="2024-12-13T01:37:52.621685713Z" level=info msg="StartContainer for \"96099518d92ffe293a609d8023716fe89bd1fc6baca378c2e1aa1861a0198baf\" returns successfully"
Dec 13 01:37:53.128528 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:37:55.940479 systemd-networkd[1249]: lxc_health: Link UP
Dec 13 01:37:55.947624 systemd-networkd[1249]: lxc_health: Gained carrier
Dec 13 01:37:56.472190 kubelet[3030]: I1213 01:37:56.471691 3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hqvhl" podStartSLOduration=8.471657652 podStartE2EDuration="8.471657652s" podCreationTimestamp="2024-12-13 01:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:53.555898349 +0000 UTC m=+350.968767343" watchObservedRunningTime="2024-12-13 01:37:56.471657652 +0000 UTC m=+353.884526636"
Dec 13 01:37:57.924608 systemd-networkd[1249]: lxc_health: Gained IPv6LL
Dec 13 01:37:58.725782 systemd[1]: run-containerd-runc-k8s.io-96099518d92ffe293a609d8023716fe89bd1fc6baca378c2e1aa1861a0198baf-runc.O5T3EA.mount: Deactivated successfully.
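The pod_startup_latency_tracker entry above is directly checkable from its own fields: podStartSLOduration=8.471657652 is watchObservedRunningTime (01:37:56.471657652) minus podCreationTimestamp (01:37:48), and with both pull timestamps at the zero value (no image had to be pulled) it equals podStartE2EDuration. A quick recomputation:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time string form, which is what
// the kubelet prints in the entry above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry.
	created := mustParse("2024-12-13 01:37:48 +0000 UTC")
	running := mustParse("2024-12-13 01:37:56.471657652 +0000 UTC")
	fmt.Println(running.Sub(created)) // 8.471657652s
}
```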
Dec 13 01:38:02.706837 containerd[1627]: time="2024-12-13T01:38:02.706800192Z" level=info msg="StopPodSandbox for \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\""
Dec 13 01:38:02.707292 containerd[1627]: time="2024-12-13T01:38:02.706894367Z" level=info msg="TearDown network for sandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" successfully"
Dec 13 01:38:02.707292 containerd[1627]: time="2024-12-13T01:38:02.706906821Z" level=info msg="StopPodSandbox for \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" returns successfully"
Dec 13 01:38:02.707943 containerd[1627]: time="2024-12-13T01:38:02.707636412Z" level=info msg="RemovePodSandbox for \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\""
Dec 13 01:38:02.707943 containerd[1627]: time="2024-12-13T01:38:02.707694075Z" level=info msg="Forcibly stopping sandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\""
Dec 13 01:38:02.707943 containerd[1627]: time="2024-12-13T01:38:02.707749584Z" level=info msg="TearDown network for sandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" successfully"
Dec 13 01:38:02.711654 containerd[1627]: time="2024-12-13T01:38:02.711620916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:38:02.711760 containerd[1627]: time="2024-12-13T01:38:02.711666014Z" level=info msg="RemovePodSandbox \"b63368eee4a18a3677a630da94a0d0a977d6a7f6409bbda111a15937c5d2e5f5\" returns successfully"
Dec 13 01:38:02.712252 containerd[1627]: time="2024-12-13T01:38:02.712007024Z" level=info msg="StopPodSandbox for \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\""
Dec 13 01:38:02.712252 containerd[1627]: time="2024-12-13T01:38:02.712087452Z" level=info msg="TearDown network for sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" successfully"
Dec 13 01:38:02.712252 containerd[1627]: time="2024-12-13T01:38:02.712100046Z" level=info msg="StopPodSandbox for \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" returns successfully"
Dec 13 01:38:02.712384 containerd[1627]: time="2024-12-13T01:38:02.712343132Z" level=info msg="RemovePodSandbox for \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\""
Dec 13 01:38:02.712384 containerd[1627]: time="2024-12-13T01:38:02.712364504Z" level=info msg="Forcibly stopping sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\""
Dec 13 01:38:02.712456 containerd[1627]: time="2024-12-13T01:38:02.712443520Z" level=info msg="TearDown network for sandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" successfully"
Dec 13 01:38:02.715183 containerd[1627]: time="2024-12-13T01:38:02.715152954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:38:02.715240 containerd[1627]: time="2024-12-13T01:38:02.715221939Z" level=info msg="RemovePodSandbox \"50f0b241c3f5e723e228741712ea04bea934336e21fe6ddc0e9d3b5b2d5f7cac\" returns successfully"
Dec 13 01:38:05.443787 sshd[4969]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:05.447139 systemd[1]: sshd@23-78.47.218.196:22-147.75.109.163:55330.service: Deactivated successfully.
Dec 13 01:38:05.451236 systemd-logind[1605]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:38:05.451909 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:38:05.454991 systemd-logind[1605]: Removed session 23.
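The closing StopPodSandbox / RemovePodSandbox pairs are the kubelet's periodic sandbox cleanup catching up with the two pods deleted at 01:37:44, and they show the same not-found tolerance as the container path: the status lookup fails, containerd logs a warning, and removal still "returns successfully". A sketch of that force-remove flow follows; the sandboxAPI interface, errNotFound sentinel, and `gone` fake are all invented for the sketch.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("an error occurred when try to find sandbox: not found")

// sandboxAPI is a stand-in for the sandbox half of the runtime
// surface in the log; names and shapes are illustrative, not real CRI.
type sandboxAPI interface {
	TearDownNetwork(id string) error
	StopPodSandbox(id string) error
	RemovePodSandbox(id string) error
}

// forceRemove mirrors the "Forcibly stopping sandbox" path above:
// tear down networking, stop, then remove, and keep going when a
// step reports the sandbox is already gone.
func forceRemove(rt sandboxAPI, id string) error {
	steps := []func(string) error{rt.TearDownNetwork, rt.StopPodSandbox, rt.RemovePodSandbox}
	for _, step := range steps {
		if err := step(id); err != nil && !errors.Is(err, errNotFound) {
			return err
		}
	}
	fmt.Printf("RemovePodSandbox %q returns successfully\n", id)
	return nil
}

// gone simulates a sandbox that was already cleaned up.
type gone struct{}

func (gone) TearDownNetwork(string) error  { return nil }
func (gone) StopPodSandbox(string) error   { return errNotFound }
func (gone) RemovePodSandbox(string) error { return errNotFound }

func main() {
	if err := forceRemove(gone{}, "sb-1"); err != nil {
		panic(err)
	}
}
```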