Dec 13 01:27:10.104510 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:27:10.104548 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:27:10.104563 kernel: BIOS-provided physical RAM map:
Dec 13 01:27:10.104569 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:27:10.104575 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:27:10.104581 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:27:10.104589 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:27:10.104595 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:27:10.104601 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:27:10.104607 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:27:10.104619 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:27:10.104625 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Dec 13 01:27:10.104632 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Dec 13 01:27:10.104638 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Dec 13 01:27:10.104648 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:27:10.104655 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:27:10.104665 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:27:10.104672 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:27:10.104678 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:27:10.104685 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:27:10.104692 kernel: NX (Execute Disable) protection: active
Dec 13 01:27:10.104699 kernel: APIC: Static calls initialized
Dec 13 01:27:10.104705 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:27:10.104712 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Dec 13 01:27:10.104719 kernel: SMBIOS 2.8 present.
Dec 13 01:27:10.104726 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:27:10.104732 kernel: Hypervisor detected: KVM
Dec 13 01:27:10.104741 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:27:10.104748 kernel: kvm-clock: using sched offset of 6203196955 cycles
Dec 13 01:27:10.104755 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:27:10.104762 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:27:10.104770 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:27:10.104777 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:27:10.104784 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:27:10.104791 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:27:10.104798 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:27:10.104808 kernel: Using GB pages for direct mapping
Dec 13 01:27:10.104815 kernel: Secure boot disabled
Dec 13 01:27:10.104822 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:27:10.104829 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:27:10.104842 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:27:10.104849 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:27:10.104857 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:27:10.104869 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:27:10.104884 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:27:10.104911 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:27:10.104921 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:27:10.104930 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:27:10.104940 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:27:10.104950 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:27:10.104965 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:27:10.104974 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:27:10.104984 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:27:10.104993 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:27:10.105012 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:27:10.105022 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:27:10.105032 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:27:10.105045 kernel: No NUMA configuration found
Dec 13 01:27:10.105055 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:27:10.105069 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:27:10.105080 kernel: Zone ranges:
Dec 13 01:27:10.105090 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:27:10.105100 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:27:10.105111 kernel: Normal empty
Dec 13 01:27:10.105121 kernel: Movable zone start for each node
Dec 13 01:27:10.105131 kernel: Early memory node ranges
Dec 13 01:27:10.105141 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:27:10.105152 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:27:10.105162 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:27:10.105183 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:27:10.105193 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:27:10.105202 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:27:10.105216 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:27:10.105226 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:27:10.105242 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:27:10.105253 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:27:10.105263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:27:10.105274 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:27:10.105290 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:27:10.105301 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:27:10.105311 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:27:10.105321 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:27:10.105330 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:27:10.105340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:27:10.105350 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:27:10.105359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:27:10.105369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:27:10.105383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:27:10.105392 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:27:10.105401 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:27:10.105410 kernel: TSC deadline timer available
Dec 13 01:27:10.105420 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:27:10.105430 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:27:10.105440 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:27:10.105450 kernel: kvm-guest: setup PV sched yield
Dec 13 01:27:10.105460 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:27:10.105474 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:27:10.105485 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:27:10.105500 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:27:10.105513 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:27:10.105532 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:27:10.105549 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:27:10.105569 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:27:10.105586 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:27:10.105614 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:27:10.105647 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:27:10.105658 kernel: random: crng init done
Dec 13 01:27:10.105673 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:27:10.105685 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:27:10.105696 kernel: Fallback order for Node 0: 0
Dec 13 01:27:10.105707 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:27:10.105717 kernel: Policy zone: DMA32
Dec 13 01:27:10.105727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:27:10.105737 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved)
Dec 13 01:27:10.105752 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:27:10.105761 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:27:10.105771 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:27:10.105781 kernel: Dynamic Preempt: voluntary
Dec 13 01:27:10.105802 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:27:10.105816 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:27:10.105826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:27:10.105836 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:27:10.105847 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:27:10.105857 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:27:10.105981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:27:10.105993 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:27:10.106023 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:27:10.106036 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:27:10.106044 kernel: Console: colour dummy device 80x25
Dec 13 01:27:10.106052 kernel: printk: console [ttyS0] enabled
Dec 13 01:27:10.106063 kernel: ACPI: Core revision 20230628
Dec 13 01:27:10.106071 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:27:10.106082 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:27:10.106092 kernel: x2apic enabled
Dec 13 01:27:10.106102 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:27:10.106113 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:27:10.106124 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:27:10.106135 kernel: kvm-guest: setup PV IPIs
Dec 13 01:27:10.106147 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:27:10.106158 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:27:10.106173 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:27:10.106183 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:27:10.106194 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:27:10.106205 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:27:10.106216 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:27:10.106227 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:27:10.106240 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:27:10.106252 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:27:10.106270 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:27:10.106283 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:27:10.106300 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:27:10.106310 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:27:10.106321 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:27:10.106334 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:27:10.106345 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:27:10.106355 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:27:10.106366 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:27:10.106383 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:27:10.106394 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:27:10.106405 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:27:10.106417 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:27:10.106427 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:27:10.106435 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:27:10.106443 kernel: landlock: Up and running.
Dec 13 01:27:10.106450 kernel: SELinux: Initializing.
Dec 13 01:27:10.106458 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:27:10.106469 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:27:10.106477 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:27:10.106485 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:27:10.106493 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:27:10.106501 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:27:10.106509 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:27:10.106516 kernel: ... version: 0
Dec 13 01:27:10.106524 kernel: ... bit width: 48
Dec 13 01:27:10.106534 kernel: ... generic registers: 6
Dec 13 01:27:10.106541 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:27:10.106549 kernel: ... max period: 00007fffffffffff
Dec 13 01:27:10.106559 kernel: ... fixed-purpose events: 0
Dec 13 01:27:10.106570 kernel: ... event mask: 000000000000003f
Dec 13 01:27:10.106582 kernel: signal: max sigframe size: 1776
Dec 13 01:27:10.106594 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:27:10.106606 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:27:10.106619 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:27:10.106630 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:27:10.106643 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:27:10.106651 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:27:10.106662 kernel: smpboot: Max logical packages: 1
Dec 13 01:27:10.106674 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:27:10.106685 kernel: devtmpfs: initialized
Dec 13 01:27:10.106696 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:27:10.106708 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:27:10.106720 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:27:10.106733 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:27:10.106750 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:27:10.106762 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:27:10.106774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:27:10.106785 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:27:10.106796 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:27:10.106808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:27:10.106820 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:27:10.106832 kernel: audit: type=2000 audit(1734053228.584:1): state=initialized audit_enabled=0 res=1
Dec 13 01:27:10.106847 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:27:10.106858 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:27:10.106869 kernel: cpuidle: using governor menu
Dec 13 01:27:10.106880 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:27:10.106907 kernel: dca service started, version 1.12.1
Dec 13 01:27:10.106918 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:27:10.106928 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:27:10.106938 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:27:10.106949 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:27:10.106964 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:27:10.106975 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:27:10.106986 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:27:10.106997 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:27:10.107022 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:27:10.107033 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:27:10.107044 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:27:10.107054 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:27:10.107065 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:27:10.107085 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:27:10.107100 kernel: ACPI: Interpreter enabled
Dec 13 01:27:10.107113 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:27:10.107126 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:27:10.107139 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:27:10.107150 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:27:10.107161 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:27:10.107171 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:27:10.107531 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:27:10.107722 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:27:10.107931 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:27:10.107952 kernel: PCI host bridge to bus 0000:00
Dec 13 01:27:10.108184 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:27:10.108359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:27:10.108527 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:27:10.108699 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:27:10.108884 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:27:10.109146 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:27:10.109312 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:27:10.109544 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:27:10.109750 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:27:10.109958 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:27:10.110165 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:27:10.110409 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:27:10.110611 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:27:10.110799 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:27:10.111050 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:27:10.111245 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:27:10.111431 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:27:10.111628 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:27:10.111852 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:27:10.112079 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:27:10.112276 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:27:10.112455 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:27:10.112662 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:27:10.112868 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:27:10.113163 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:27:10.113354 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:27:10.113542 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:27:10.113762 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:27:10.114070 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:27:10.114314 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:27:10.114520 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:27:10.114714 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:27:10.114961 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:27:10.115171 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:27:10.115194 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:27:10.115209 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:27:10.115222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:27:10.115234 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:27:10.115253 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:27:10.115265 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:27:10.115277 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:27:10.115289 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:27:10.115301 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:27:10.115313 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:27:10.115325 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:27:10.115336 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:27:10.115347 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:27:10.115365 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:27:10.115378 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:27:10.115390 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:27:10.115402 kernel: iommu: Default domain type: Translated
Dec 13 01:27:10.115414 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:27:10.115426 kernel: efivars: Registered efivars operations
Dec 13 01:27:10.115438 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:27:10.115449 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:27:10.115462 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:27:10.115479 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:27:10.115490 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:27:10.115502 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:27:10.115724 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:27:10.116019 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:27:10.116221 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:27:10.116242 kernel: vgaarb: loaded
Dec 13 01:27:10.116255 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:27:10.116276 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:27:10.116288 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:27:10.116300 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:27:10.116314 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:27:10.116326 kernel: pnp: PnP ACPI init
Dec 13 01:27:10.116580 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:27:10.116602 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:27:10.116615 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:27:10.116635 kernel: NET: Registered PF_INET protocol family
Dec 13 01:27:10.116646 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:27:10.116658 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:27:10.116670 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:27:10.116682 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:27:10.116694 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:27:10.116706 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:27:10.116718 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:27:10.116730 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:27:10.116749 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:27:10.116761 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:27:10.117089 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:27:10.117285 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:27:10.117473 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:27:10.117652 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:27:10.117828 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:27:10.118062 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:27:10.118249 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:27:10.118434 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:27:10.118454 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:27:10.118467 kernel: Initialise system trusted keyrings
Dec 13 01:27:10.118480 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:27:10.118492 kernel: Key type asymmetric registered
Dec 13 01:27:10.118504 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:27:10.118516 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:27:10.118528 kernel: io scheduler mq-deadline registered
Dec 13 01:27:10.118546 kernel: io scheduler kyber registered
Dec 13 01:27:10.118558 kernel: io scheduler bfq registered
Dec 13 01:27:10.118570 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:27:10.118584 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:27:10.118597 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:27:10.118608 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:27:10.118620 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:27:10.118632 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:27:10.118645 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:27:10.118664 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:27:10.118676 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:27:10.118992 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:27:10.119178 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:27:10.119310 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:27:09 UTC (1734053229)
Dec 13 01:27:10.119468 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:27:10.119488 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 01:27:10.119500 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:27:10.119521 kernel: efifb: probing for efifb
Dec 13 01:27:10.119533 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Dec 13 01:27:10.119545 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Dec 13 01:27:10.119557 kernel: efifb: scrolling: redraw
Dec 13 01:27:10.119570 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Dec 13 01:27:10.119582 kernel: Console: switching to colour frame buffer device 100x37
Dec 13 01:27:10.119624 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:27:10.119640 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:27:10.119651 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:27:10.119666 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:27:10.119678 kernel: Segment Routing with IPv6
Dec 13 01:27:10.119690 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:27:10.119702 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:27:10.119713 kernel: Key type dns_resolver registered
Dec 13 01:27:10.119724 kernel: IPI shorthand broadcast: enabled
Dec 13 01:27:10.119736 kernel: sched_clock: Marking stable (1137003996, 117132964)->(1281382841, -27245881)
Dec 13 01:27:10.119748 kernel: registered taskstats version 1
Dec 13 01:27:10.119760 kernel: Loading compiled-in X.509 certificates
Dec 13 01:27:10.119776 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:27:10.119787 kernel: Key type .fscrypt registered
Dec 13 01:27:10.119799 kernel: Key type fscrypt-provisioning registered
Dec 13 01:27:10.119811 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:27:10.119822 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:27:10.119831 kernel: ima: No architecture policies found
Dec 13 01:27:10.119839 kernel: clk: Disabling unused clocks
Dec 13 01:27:10.119847 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:27:10.119855 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:27:10.119867 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:27:10.119875 kernel: Run /init as init process
Dec 13 01:27:10.119883 kernel: with arguments:
Dec 13 01:27:10.119907 kernel: /init
Dec 13 01:27:10.119915 kernel: with environment:
Dec 13 01:27:10.119923 kernel: HOME=/
Dec 13 01:27:10.119931 kernel: TERM=linux
Dec 13 01:27:10.119939 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:27:10.119958 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:27:10.119984 systemd[1]: Detected virtualization kvm.
Dec 13 01:27:10.119997 systemd[1]: Detected architecture x86-64.
Dec 13 01:27:10.120019 systemd[1]: Running in initrd.
Dec 13 01:27:10.120048 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:27:10.120066 systemd[1]: Hostname set to .
Dec 13 01:27:10.120078 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:27:10.120089 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:27:10.120100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:27:10.120111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:27:10.120124 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:27:10.120136 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:27:10.120152 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:27:10.120163 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:27:10.120177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:27:10.120189 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:27:10.120201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:27:10.120213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:27:10.120225 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:27:10.120241 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:27:10.120253 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:27:10.120265 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:27:10.120277 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:27:10.120289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:27:10.120300 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:27:10.120313 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:27:10.120326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:27:10.120339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:27:10.120358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:27:10.120371 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:27:10.120383 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:27:10.120397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:27:10.120410 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:27:10.120423 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:27:10.120437 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:27:10.120450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:27:10.120469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:27:10.120482 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:27:10.120494 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:27:10.120506 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:27:10.120558 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 01:27:10.120597 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:27:10.120611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:10.120624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:27:10.120637 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:27:10.120656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:27:10.120670 systemd-journald[193]: Journal started
Dec 13 01:27:10.120697 systemd-journald[193]: Runtime Journal (/run/log/journal/6f3d7d849f8b4edf83c2a0e63cf70632) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:27:10.107570 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:27:10.123066 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:27:10.149376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:27:10.152760 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:27:10.153265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:27:10.157041 kernel: Bridge firewalling registered
Dec 13 01:27:10.153606 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:27:10.158542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:27:10.161780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:27:10.165762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:27:10.177288 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:27:10.180315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:27:10.200574 dracut-cmdline[224]: dracut-dracut-053
Dec 13 01:27:10.204086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:27:10.207020 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:27:10.216272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:27:10.263682 systemd-resolved[240]: Positive Trust Anchors:
Dec 13 01:27:10.263707 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:27:10.263753 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:27:10.267649 systemd-resolved[240]: Defaulting to hostname 'linux'.
Dec 13 01:27:10.269507 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:27:10.275947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:27:10.335008 kernel: SCSI subsystem initialized
Dec 13 01:27:10.346932 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:27:10.361924 kernel: iscsi: registered transport (tcp)
Dec 13 01:27:10.384925 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:27:10.384961 kernel: QLogic iSCSI HBA Driver
Dec 13 01:27:10.451199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:27:10.463270 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:27:10.488944 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:27:10.489069 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:27:10.489088 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:27:10.533932 kernel: raid6: avx2x4 gen() 29059 MB/s
Dec 13 01:27:10.550929 kernel: raid6: avx2x2 gen() 30994 MB/s
Dec 13 01:27:10.568021 kernel: raid6: avx2x1 gen() 25737 MB/s
Dec 13 01:27:10.568058 kernel: raid6: using algorithm avx2x2 gen() 30994 MB/s
Dec 13 01:27:10.586025 kernel: raid6: .... xor() 19833 MB/s, rmw enabled
Dec 13 01:27:10.586063 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:27:10.606916 kernel: xor: automatically using best checksumming function avx
Dec 13 01:27:10.796002 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:27:10.817294 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:27:10.828442 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:27:10.847916 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Dec 13 01:27:10.853401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:27:10.874373 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:27:10.895433 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Dec 13 01:27:10.941460 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:27:10.951158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:27:11.040360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:27:11.055197 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:27:11.073959 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:27:11.112090 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:27:11.096851 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:27:11.107554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:27:11.109305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:27:11.136314 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:27:11.142263 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:27:11.172954 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:27:11.173210 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:27:11.173229 kernel: libata version 3.00 loaded.
Dec 13 01:27:11.173246 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:27:11.173262 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:27:11.173288 kernel: GPT:9289727 != 19775487
Dec 13 01:27:11.173314 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:27:11.173330 kernel: GPT:9289727 != 19775487
Dec 13 01:27:11.173345 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:27:11.173361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:27:11.173377 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:27:11.211172 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:27:11.211204 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:27:11.211460 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:27:11.211700 kernel: scsi host0: ahci
Dec 13 01:27:11.211986 kernel: scsi host1: ahci
Dec 13 01:27:11.212238 kernel: scsi host2: ahci
Dec 13 01:27:11.212539 kernel: scsi host3: ahci
Dec 13 01:27:11.212809 kernel: scsi host4: ahci
Dec 13 01:27:11.213088 kernel: scsi host5: ahci
Dec 13 01:27:11.213326 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 01:27:11.213346 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 01:27:11.213363 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 01:27:11.213379 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 01:27:11.213394 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 01:27:11.213409 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 01:27:11.144499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:27:11.144711 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:27:11.150515 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:27:11.153969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:27:11.154501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:11.156134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:27:11.168496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:27:11.170935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:27:11.178030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:27:11.178182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:11.208700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:27:11.236937 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469)
Dec 13 01:27:11.248917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:11.257930 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (461)
Dec 13 01:27:11.264311 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:27:11.274061 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:27:11.280704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:27:11.291399 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:27:11.291586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:27:11.305288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:27:11.321760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:27:11.339717 disk-uuid[568]: Primary Header is updated.
Dec 13 01:27:11.339717 disk-uuid[568]: Secondary Entries is updated.
Dec 13 01:27:11.339717 disk-uuid[568]: Secondary Header is updated.
Dec 13 01:27:11.345931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:27:11.350416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:27:11.355922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:27:11.518443 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:27:11.518562 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:27:11.519598 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:27:11.521459 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:27:11.521487 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:27:11.523834 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:27:11.523958 kernel: ata3.00: applying bridge limits
Dec 13 01:27:11.525607 kernel: ata3.00: configured for UDMA/100
Dec 13 01:27:11.529287 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:27:11.529404 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:27:11.578351 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:27:11.592055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:27:11.592091 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:27:12.357142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:27:12.358031 disk-uuid[575]: The operation has completed successfully.
Dec 13 01:27:12.398350 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:27:12.398501 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:27:12.436327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:27:12.443678 sh[596]: Success
Dec 13 01:27:12.463973 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:27:12.516032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:27:12.531609 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:27:12.535237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:27:12.557464 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:27:12.557582 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:27:12.557604 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:27:12.558823 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:27:12.559801 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:27:12.569322 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:27:12.573135 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:27:12.590373 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:27:12.595301 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:27:12.606862 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:27:12.607000 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:27:12.607021 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:27:12.610946 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:27:12.623851 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:27:12.625885 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:27:12.639184 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:27:12.649298 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:27:12.733476 ignition[684]: Ignition 2.19.0
Dec 13 01:27:12.733939 ignition[684]: Stage: fetch-offline
Dec 13 01:27:12.733994 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:27:12.734006 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:27:12.734122 ignition[684]: parsed url from cmdline: ""
Dec 13 01:27:12.734127 ignition[684]: no config URL provided
Dec 13 01:27:12.734134 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:27:12.734146 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:27:12.734179 ignition[684]: op(1): [started] loading QEMU firmware config module
Dec 13 01:27:12.734185 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:27:12.744909 ignition[684]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:27:12.773672 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:27:12.792745 ignition[684]: parsing config with SHA512: 6fefd2a9d225b00a6d84a7e2fc839fcfe172b4704ab5b813fdd746f55890c033fb7016be777405fb1b420db2e892cbfa757a0a2c64b1ed8e4dc854e9cbb7e71b
Dec 13 01:27:12.799297 unknown[684]: fetched base config from "system"
Dec 13 01:27:12.799314 unknown[684]: fetched user config from "qemu"
Dec 13 01:27:12.799727 ignition[684]: fetch-offline: fetch-offline passed
Dec 13 01:27:12.799846 ignition[684]: Ignition finished successfully
Dec 13 01:27:12.885247 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:27:12.886688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:27:12.910220 systemd-networkd[785]: lo: Link UP
Dec 13 01:27:12.910232 systemd-networkd[785]: lo: Gained carrier
Dec 13 01:27:12.912017 systemd-networkd[785]: Enumeration completed
Dec 13 01:27:12.912198 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:27:12.912485 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:27:12.912490 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:27:12.913659 systemd-networkd[785]: eth0: Link UP
Dec 13 01:27:12.913664 systemd-networkd[785]: eth0: Gained carrier
Dec 13 01:27:12.913673 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:27:12.913970 systemd[1]: Reached target network.target - Network.
Dec 13 01:27:12.915463 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:27:12.926101 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:27:12.931061 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:27:12.950425 ignition[788]: Ignition 2.19.0
Dec 13 01:27:12.950442 ignition[788]: Stage: kargs
Dec 13 01:27:12.950661 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:27:12.950678 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:27:12.951910 ignition[788]: kargs: kargs passed
Dec 13 01:27:12.951988 ignition[788]: Ignition finished successfully
Dec 13 01:27:12.956285 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:27:12.990171 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:27:13.004634 ignition[797]: Ignition 2.19.0
Dec 13 01:27:13.004651 ignition[797]: Stage: disks
Dec 13 01:27:13.004932 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:27:13.004952 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:27:13.008118 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:27:13.006086 ignition[797]: disks: disks passed
Dec 13 01:27:13.010955 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:27:13.006151 ignition[797]: Ignition finished successfully
Dec 13 01:27:13.012526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:27:13.014808 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:27:13.017349 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:27:13.019343 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:27:13.031282 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:27:13.051361 systemd-resolved[240]: Detected conflict on linux IN A 10.0.0.47
Dec 13 01:27:13.051389 systemd-resolved[240]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Dec 13 01:27:13.052480 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:27:13.061337 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:27:13.077152 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:27:13.171945 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:27:13.172943 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:27:13.174880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:27:13.192105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:27:13.194575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:27:13.197751 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:27:13.197818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:27:13.197851 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:27:13.204467 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:27:13.206907 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:27:13.268366 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) Dec 13 01:27:13.271116 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:27:13.271199 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:27:13.271213 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:27:13.274946 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:27:13.277791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:27:13.288809 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:27:13.294314 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:27:13.299765 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:27:13.306108 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:27:13.412871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:27:13.430066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:27:13.432227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:27:13.441927 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:27:13.463511 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:27:13.474863 ignition[930]: INFO : Ignition 2.19.0 Dec 13 01:27:13.474863 ignition[930]: INFO : Stage: mount Dec 13 01:27:13.476737 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:13.476737 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:27:13.476737 ignition[930]: INFO : mount: mount passed Dec 13 01:27:13.476737 ignition[930]: INFO : Ignition finished successfully Dec 13 01:27:13.478230 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:27:13.491123 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:27:13.555155 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:27:13.572161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:27:13.585033 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Dec 13 01:27:13.585099 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:27:13.585120 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:27:13.586231 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:27:13.589933 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:27:13.592129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:27:13.882999 ignition[960]: INFO : Ignition 2.19.0
Dec 13 01:27:13.882999 ignition[960]: INFO : Stage: files
Dec 13 01:27:13.885286 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:27:13.885286 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:27:13.885286 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:27:13.888707 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:27:13.888707 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:27:13.893820 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:27:13.895409 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:27:13.895409 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:27:13.894690 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 01:27:13.900001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:27:13.900001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:27:13.966600 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:27:14.036215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:27:14.036215 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:27:14.040268 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:27:14.094307 systemd-networkd[785]: eth0: Gained IPv6LL
Dec 13 01:27:14.519683 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:27:14.690332 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:27:14.692660 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:27:14.694924 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:27:14.697016 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:27:14.699307 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:27:14.701413 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:27:14.703742 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:27:14.705907 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:27:14.708191 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:27:14.710625 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:27:14.713091 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:27:14.715338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:27:14.718566 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:27:14.721579 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:27:14.724378 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:27:15.135512 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:27:16.256873 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:27:16.256873 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:27:16.279066 ignition[960]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:27:16.396509 ignition[960]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:27:16.406265 ignition[960]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:27:16.408654 ignition[960]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:27:16.408654 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:27:16.408654 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:27:16.408654 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:27:16.408654 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:27:16.408654 ignition[960]: INFO : files: files passed
Dec 13 01:27:16.408654 ignition[960]: INFO : Ignition finished successfully
Dec 13 01:27:16.416950 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:27:16.425595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:27:16.439199 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:27:16.446264 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:27:16.446422 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:27:16.450473 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:27:16.456048 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:27:16.456048 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:27:16.460586 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:27:16.462871 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:27:16.466017 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:27:16.478289 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:27:16.514238 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:27:16.514464 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:27:16.517494 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:27:16.519929 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:27:16.520120 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:27:16.521546 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:27:16.550175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:27:16.563317 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:27:16.578487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:27:16.580309 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:27:16.583204 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:27:16.585907 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:27:16.586119 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:27:16.589151 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:27:16.591543 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:27:16.594240 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:27:16.596908 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:27:16.599585 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:27:16.602509 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:27:16.604418 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:27:16.604942 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:27:16.605355 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:27:16.605574 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:27:16.605785 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:27:16.605968 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:27:16.606657 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:27:16.607368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:27:16.607744 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:27:16.607949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:27:16.608406 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:27:16.608583 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:27:16.609426 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:27:16.609544 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:27:16.609927 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:27:16.610265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:27:16.616158 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:27:16.616704 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:27:16.617274 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:27:16.617705 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:27:16.664488 ignition[1014]: INFO : Ignition 2.19.0
Dec 13 01:27:16.664488 ignition[1014]: INFO : Stage: umount
Dec 13 01:27:16.664488 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:27:16.664488 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:27:16.664488 ignition[1014]: INFO : umount: umount passed
Dec 13 01:27:16.664488 ignition[1014]: INFO : Ignition finished successfully
Dec 13 01:27:16.617888 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:27:16.618339 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:27:16.618482 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:27:16.619044 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:27:16.619246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:27:16.619726 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:27:16.619915 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:27:16.634526 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:27:16.637104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:27:16.639271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:27:16.639519 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:27:16.642346 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:27:16.642533 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:27:16.652347 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:27:16.652631 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:27:16.667250 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:27:16.667433 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:27:16.669703 systemd[1]: Stopped target network.target - Network.
Dec 13 01:27:16.672220 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:27:16.672321 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:27:16.674654 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:27:16.674710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:27:16.677261 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:27:16.677359 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:27:16.679662 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:27:16.679735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:27:16.681436 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:27:16.683799 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:27:16.686103 systemd-networkd[785]: eth0: DHCPv6 lease lost
Dec 13 01:27:16.687812 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:27:16.688642 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:27:16.688829 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:27:16.692418 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:27:16.692543 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:27:16.710435 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:27:16.711572 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:27:16.711675 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:27:16.713116 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:27:16.713997 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:27:16.714168 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:27:16.721760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:27:16.722444 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:27:16.724391 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:27:16.724498 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:27:16.727674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:27:16.727842 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:27:16.735614 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:27:16.735786 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:27:16.739923 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:27:16.740155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:27:16.743364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:27:16.743475 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:27:16.745041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:27:16.745096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:27:16.749659 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:27:16.749790 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:27:16.752455 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:27:16.752535 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:27:16.755123 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:27:16.755191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:27:16.780369 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:27:16.782947 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:27:16.783080 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:27:16.785976 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:27:16.786091 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:27:16.788635 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:27:16.788763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:27:16.791052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:27:16.791147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:16.794372 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:27:16.794531 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:27:16.950511 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:27:16.950685 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:27:16.956439 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:27:16.958009 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:27:16.958170 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:27:16.980232 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:27:16.993465 systemd[1]: Switching root.
Dec 13 01:27:17.035474 systemd-journald[193]: Journal stopped
Dec 13 01:27:18.457048 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:27:18.457130 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:27:18.457145 kernel: SELinux: policy capability open_perms=1
Dec 13 01:27:18.457163 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:27:18.457175 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:27:18.457186 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:27:18.457197 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:27:18.457213 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:27:18.457238 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:27:18.457250 kernel: audit: type=1403 audit(1734053237.523:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:27:18.457263 systemd[1]: Successfully loaded SELinux policy in 59.600ms.
Dec 13 01:27:18.457294 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.177ms.
Dec 13 01:27:18.457307 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:27:18.457320 systemd[1]: Detected virtualization kvm.
Dec 13 01:27:18.457332 systemd[1]: Detected architecture x86-64.
Dec 13 01:27:18.457345 systemd[1]: Detected first boot.
Dec 13 01:27:18.457360 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:27:18.457373 zram_generator::config[1058]: No configuration found.
Dec 13 01:27:18.457386 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:27:18.457399 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:27:18.457411 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:27:18.457423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:27:18.457436 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:27:18.457449 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:27:18.457463 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:27:18.457475 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:27:18.457488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:27:18.457501 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:27:18.457513 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:27:18.457528 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:27:18.457540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:27:18.457553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:27:18.457566 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:27:18.457581 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:27:18.457594 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:27:18.457606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:27:18.457618 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:27:18.457631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:27:18.457643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:27:18.457655 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:27:18.457668 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:27:18.457684 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:27:18.457696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:27:18.457708 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:27:18.457720 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:27:18.457733 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:27:18.457745 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:27:18.457757 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:27:18.457769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:27:18.457791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:27:18.457807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:27:18.457820 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:27:18.457832 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:27:18.457845 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:27:18.457857 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:27:18.457869 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:27:18.457881 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:27:18.457906 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:27:18.457918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:27:18.457935 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:27:18.457947 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:27:18.457959 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:27:18.457971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:27:18.457983 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:27:18.457996 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:27:18.458008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:27:18.458020 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:27:18.458035 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:27:18.458048 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:27:18.458060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:27:18.458073 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:27:18.458086 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:27:18.458098 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:27:18.458110 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:27:18.458122 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:27:18.458137 kernel: fuse: init (API version 7.39)
Dec 13 01:27:18.458149 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:27:18.458161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:27:18.458173 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:27:18.458185 kernel: loop: module loaded
Dec 13 01:27:18.458197 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:27:18.458209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:27:18.458221 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:27:18.458233 systemd[1]: Stopped verity-setup.service.
Dec 13 01:27:18.458246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:27:18.458260 kernel: ACPI: bus type drm_connector registered
Dec 13 01:27:18.458272 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:27:18.458302 systemd-journald[1142]: Collecting audit messages is disabled.
Dec 13 01:27:18.458327 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:27:18.458341 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:27:18.458353 systemd-journald[1142]: Journal started
Dec 13 01:27:18.458375 systemd-journald[1142]: Runtime Journal (/run/log/journal/6f3d7d849f8b4edf83c2a0e63cf70632) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:27:18.198189 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:27:18.217454 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:27:18.217988 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:27:18.461376 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:27:18.462253 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:27:18.463591 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:27:18.464881 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:27:18.466300 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:27:18.467857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:27:18.469484 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:27:18.469701 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:27:18.471488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:27:18.471673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:27:18.473301 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:27:18.473491 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:27:18.475235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:27:18.475461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:27:18.477675 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:27:18.477926 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:27:18.479962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:27:18.480200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:27:18.482148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:27:18.484066 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:27:18.485788 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:27:18.507987 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:27:18.529170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:27:18.532426 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:27:18.533740 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:27:18.533780 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:27:18.542245 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:27:18.544872 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:27:18.549125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:27:18.550590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:27:18.553654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:27:18.556993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:27:18.559118 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:27:18.560375 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:27:18.561525 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:27:18.562698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:27:18.567028 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:27:18.573330 systemd-journald[1142]: Time spent on flushing to /var/log/journal/6f3d7d849f8b4edf83c2a0e63cf70632 is 21.048ms for 1000 entries.
Dec 13 01:27:18.573330 systemd-journald[1142]: System Journal (/var/log/journal/6f3d7d849f8b4edf83c2a0e63cf70632) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:27:18.909590 systemd-journald[1142]: Received client request to flush runtime journal.
Dec 13 01:27:18.909669 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 01:27:18.909698 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:27:18.571864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:27:18.578959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:27:18.580844 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:27:18.582223 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:27:18.583882 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:27:18.591025 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:27:18.611614 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:27:18.871147 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:27:18.873502 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:27:18.875249 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:27:18.884271 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:27:18.896529 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Dec 13 01:27:18.896552 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Dec 13 01:27:18.906973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:27:18.966947 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:27:18.968929 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:27:18.976370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:27:18.977197 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:27:18.992937 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 01:27:19.012357 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:27:19.058042 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:27:19.072936 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 01:27:19.090522 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Dec 13 01:27:19.090551 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Dec 13 01:27:19.099824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:27:19.134924 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 01:27:19.146957 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:27:19.161919 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 01:27:19.173115 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:27:19.175014 (sd-merge)[1199]: Merged extensions into '/usr'.
Dec 13 01:27:19.180912 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:27:19.181054 systemd[1]: Reloading...
Dec 13 01:27:19.263926 zram_generator::config[1224]: No configuration found.
Dec 13 01:27:19.442001 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:27:19.455665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:27:19.517015 systemd[1]: Reloading finished in 335 ms.
Dec 13 01:27:19.585564 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:27:19.587356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:27:19.603294 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:27:19.605869 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:27:19.613667 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:27:19.613690 systemd[1]: Reloading...
Dec 13 01:27:19.647288 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:27:19.647662 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:27:19.648678 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:27:19.649037 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Dec 13 01:27:19.649116 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Dec 13 01:27:19.655562 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:27:19.655579 systemd-tmpfiles[1263]: Skipping /boot
Dec 13 01:27:19.672244 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:27:19.672293 systemd-tmpfiles[1263]: Skipping /boot
Dec 13 01:27:19.717929 zram_generator::config[1292]: No configuration found.
Dec 13 01:27:19.844035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:27:19.907487 systemd[1]: Reloading finished in 293 ms.
Dec 13 01:27:19.927561 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:27:19.939659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:27:19.949610 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:27:19.952395 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:27:19.955000 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:27:19.960800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:27:19.967963 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:27:19.970533 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:27:19.972988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:27:19.973165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:27:19.978108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:27:19.986044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:27:19.990050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:27:19.991303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:27:19.994349 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:27:19.996000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:27:19.997082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:27:19.997353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:27:20.000364 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:27:20.000532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:27:20.003431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:27:20.003720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:27:20.011548 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:27:20.013545 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Dec 13 01:27:20.021163 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:27:20.027480 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:27:20.027876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:27:20.029958 augenrules[1358]: No rules
Dec 13 01:27:20.037455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:27:20.041261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:27:20.058242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:27:20.064204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:27:20.065451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:27:20.067645 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:27:20.068808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:27:20.069827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:27:20.071677 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:27:20.073412 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:27:20.075245 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:27:20.077045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:27:20.077249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:27:20.079191 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:27:20.079384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:27:20.088177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:27:20.088384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:27:20.094534 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:27:20.094762 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:27:20.110201 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:27:20.112981 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:27:20.115926 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373)
Dec 13 01:27:20.118917 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373)
Dec 13 01:27:20.134907 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:27:20.143215 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:27:20.144407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:27:20.144528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:27:20.149172 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:27:20.150384 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:27:20.155343 systemd-resolved[1332]: Positive Trust Anchors:
Dec 13 01:27:20.155366 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:27:20.155399 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:27:20.163454 systemd-resolved[1332]: Defaulting to hostname 'linux'.
Dec 13 01:27:20.170870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1385)
Dec 13 01:27:20.230645 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:27:20.232267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:27:20.287301 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:27:20.289594 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:27:20.295132 systemd-networkd[1406]: lo: Link UP
Dec 13 01:27:20.295146 systemd-networkd[1406]: lo: Gained carrier
Dec 13 01:27:20.296948 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:27:20.297474 systemd-networkd[1406]: Enumeration completed
Dec 13 01:27:20.298745 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:27:20.298762 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:27:20.299477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:27:20.301255 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:27:20.302704 systemd[1]: Reached target network.target - Network.
Dec 13 01:27:20.302981 systemd-networkd[1406]: eth0: Link UP
Dec 13 01:27:20.302990 systemd-networkd[1406]: eth0: Gained carrier
Dec 13 01:27:20.303024 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:27:20.304576 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:27:20.313022 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:27:20.314001 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Dec 13 01:27:20.314768 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:27:20.314820 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2024-12-13 01:27:20.394976 UTC.
Dec 13 01:27:20.324116 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:27:20.329519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:27:20.345642 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:27:20.350501 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 01:27:20.354089 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:27:20.354281 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:27:20.354485 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:27:20.365326 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:27:20.435953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:27:20.446484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:27:20.446995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:20.451682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:27:20.459937 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:27:20.471936 kernel: kvm_amd: TSC scaling supported
Dec 13 01:27:20.472005 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:27:20.472047 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:27:20.472932 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:27:20.472951 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:27:20.473957 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:27:20.494945 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:27:20.609602 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:27:20.619151 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:27:20.624636 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:27:20.629563 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:27:20.667552 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:27:20.669311 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:27:20.670605 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:27:20.672005 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:27:20.673367 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:27:20.675132 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:27:20.676413 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:27:20.677811 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:27:20.679151 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:27:20.679190 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:27:20.680207 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:27:20.682358 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:27:20.685542 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:27:20.699778 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:27:20.702916 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:27:20.704614 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:27:20.705848 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:27:20.706872 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:27:20.707948 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:27:20.707982 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:27:20.709472 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:27:20.711934 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:27:20.715049 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:27:20.717038 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:27:20.720906 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:27:20.722419 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:27:20.725227 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:27:20.729074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:27:20.731779 jq[1436]: false Dec 13 01:27:20.733979 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:27:20.738164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:27:20.744139 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:27:20.748395 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:27:20.749204 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:27:20.752343 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:27:20.757036 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found loop3
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found loop4
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found loop5
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found sr0
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda1
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda2
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda3
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found usr
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda4
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda6
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda7
Dec 13 01:27:20.759935 extend-filesystems[1437]: Found vda9
Dec 13 01:27:20.785096 extend-filesystems[1437]: Checking size of /dev/vda9
Dec 13 01:27:20.768010 dbus-daemon[1435]: [system] SELinux support is enabled
Dec 13 01:27:20.761389 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:27:20.776403 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:27:20.784502 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:27:20.787344 jq[1446]: true
Dec 13 01:27:20.784822 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:27:20.788700 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:27:20.789058 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:27:20.799676 extend-filesystems[1437]: Resized partition /dev/vda9
Dec 13 01:27:20.801732 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:27:20.802266 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:27:20.806193 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:27:20.813849 update_engine[1444]: I20241213 01:27:20.813232 1444 main.cc:92] Flatcar Update Engine starting
Dec 13 01:27:20.814609 jq[1457]: true
Dec 13 01:27:20.816424 update_engine[1444]: I20241213 01:27:20.816210 1444 update_check_scheduler.cc:74] Next update check in 4m27s
Dec 13 01:27:20.820067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1387)
Dec 13 01:27:20.821964 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:27:20.828087 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:27:20.846054 tar[1452]: linux-amd64/helm
Dec 13 01:27:20.851305 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:27:20.853547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:27:20.853590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:27:20.855573 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:27:20.855612 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:27:20.871245 systemd[1]: Started locksmithd.service - Cluster reboot manager.
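The "Resized partition /dev/vda9" line plus the kernel's "EXT4-fs (vda9): resizing filesystem" message show extend-filesystems growing the root partition and then resizing the mounted ext4 filesystem online with resize2fs 1.47.1. A sketch of the equivalent manual steps (device names taken from this log; growpart comes from cloud-utils and its availability on the image is an assumption):

    growpart /dev/vda 9   # grow partition 9 to fill the disk
    resize2fs /dev/vda9   # online-resize the mounted ext4 filesystem, as done here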
Dec 13 01:27:20.923995 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 01:27:20.924035 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:27:20.924928 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:27:20.925398 systemd-logind[1443]: New seat seat0.
Dec 13 01:27:21.049492 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:27:21.048554 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:27:21.097670 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:27:21.099160 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:27:21.111510 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:27:21.126071 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:27:21.126429 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:27:21.130481 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:27:21.256749 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:27:21.282205 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:27:21.282205 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:27:21.282205 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:27:21.268360 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:27:21.289261 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Dec 13 01:27:21.273708 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:27:21.275157 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:27:21.286386 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:27:21.286632 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:27:21.318107 bash[1489]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:27:21.321705 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:27:21.324885 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:27:21.588729 containerd[1462]: time="2024-12-13T01:27:21.588264983Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:27:21.617432 containerd[1462]: time="2024-12-13T01:27:21.617350732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620009 containerd[1462]: time="2024-12-13T01:27:21.619970809Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620009 containerd[1462]: time="2024-12-13T01:27:21.619997522Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:27:21.620082 containerd[1462]: time="2024-12-13T01:27:21.620013422Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:27:21.620263 containerd[1462]: time="2024-12-13T01:27:21.620235878Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:27:21.620263 containerd[1462]: time="2024-12-13T01:27:21.620258976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620361 containerd[1462]: time="2024-12-13T01:27:21.620342529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620390 containerd[1462]: time="2024-12-13T01:27:21.620358801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620602 containerd[1462]: time="2024-12-13T01:27:21.620577975Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620602 containerd[1462]: time="2024-12-13T01:27:21.620596753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620667 containerd[1462]: time="2024-12-13T01:27:21.620610065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620667 containerd[1462]: time="2024-12-13T01:27:21.620619973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.620761 containerd[1462]: time="2024-12-13T01:27:21.620741879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.621057 containerd[1462]: time="2024-12-13T01:27:21.621035685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:27:21.621197 containerd[1462]: time="2024-12-13T01:27:21.621176381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:27:21.621197 containerd[1462]: time="2024-12-13T01:27:21.621194052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:27:21.621323 containerd[1462]: time="2024-12-13T01:27:21.621305577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:27:21.621406 containerd[1462]: time="2024-12-13T01:27:21.621391939Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:27:21.628371 containerd[1462]: time="2024-12-13T01:27:21.628334614Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:27:21.628420 containerd[1462]: time="2024-12-13T01:27:21.628384386Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:27:21.628420 containerd[1462]: time="2024-12-13T01:27:21.628408934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:27:21.628470 containerd[1462]: time="2024-12-13T01:27:21.628425165Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:27:21.628470 containerd[1462]: time="2024-12-13T01:27:21.628441890Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:27:21.628622 containerd[1462]: time="2024-12-13T01:27:21.628586925Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:27:21.629713 containerd[1462]: time="2024-12-13T01:27:21.629677407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:27:21.629852 containerd[1462]: time="2024-12-13T01:27:21.629826540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:27:21.629852 containerd[1462]: time="2024-12-13T01:27:21.629845953Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:27:21.629925 containerd[1462]: time="2024-12-13T01:27:21.629859094Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:27:21.629925 containerd[1462]: time="2024-12-13T01:27:21.629872244Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.629977 containerd[1462]: time="2024-12-13T01:27:21.629926868Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.629977 containerd[1462]: time="2024-12-13T01:27:21.629946463Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.629977 containerd[1462]: time="2024-12-13T01:27:21.629960147Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.629977 containerd[1462]: time="2024-12-13T01:27:21.629973529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.629986196Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.629998460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630011720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630035302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630058925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630070837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630093210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630107558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630120024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630130 containerd[1462]: time="2024-12-13T01:27:21.630131412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630144280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630156232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630169866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630181677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630195461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630207474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630224138Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630247297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630269771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630282287Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630351995Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630372214Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:27:21.630394 containerd[1462]: time="2024-12-13T01:27:21.630383038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:27:21.630742 containerd[1462]: time="2024-12-13T01:27:21.630394638Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:27:21.630742 containerd[1462]: time="2024-12-13T01:27:21.630404495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630742 containerd[1462]: time="2024-12-13T01:27:21.630418451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:27:21.630742 containerd[1462]: time="2024-12-13T01:27:21.630433736Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:27:21.630742 containerd[1462]: time="2024-12-13T01:27:21.630444803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:27:21.630887 containerd[1462]: time="2024-12-13T01:27:21.630774343Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:27:21.630887 containerd[1462]: time="2024-12-13T01:27:21.630843558Z" level=info msg="Connect containerd service"
Dec 13 01:27:21.630887 containerd[1462]: time="2024-12-13T01:27:21.630879666Z" level=info msg="using legacy CRI server"
Dec 13 01:27:21.630887 containerd[1462]: time="2024-12-13T01:27:21.630886573Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:27:21.631152 containerd[1462]: time="2024-12-13T01:27:21.631017491Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:27:21.631696 containerd[1462]: time="2024-12-13T01:27:21.631664953Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:27:21.632030 containerd[1462]: time="2024-12-13T01:27:21.631910116Z" level=info msg="Start subscribing containerd event"
Dec 13 01:27:21.632030 containerd[1462]: time="2024-12-13T01:27:21.631987063Z" level=info msg="Start recovering state"
Dec 13 01:27:21.632118 containerd[1462]: time="2024-12-13T01:27:21.632106926Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:27:21.632400 containerd[1462]: time="2024-12-13T01:27:21.632159436Z" level=info msg="Start event monitor"
Dec 13 01:27:21.632400 containerd[1462]: time="2024-12-13T01:27:21.632182968Z" level=info msg="Start snapshots syncer"
Dec 13 01:27:21.632400 containerd[1462]: time="2024-12-13T01:27:21.632194769Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:27:21.632400 containerd[1462]: time="2024-12-13T01:27:21.632204868Z" level=info msg="Start streaming server"
Dec 13 01:27:21.632519 containerd[1462]: time="2024-12-13T01:27:21.632406502Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:27:21.633091 containerd[1462]: time="2024-12-13T01:27:21.633057276Z" level=info msg="containerd successfully booted in 0.076906s"
Dec 13 01:27:21.633152 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:27:21.837762 tar[1452]: linux-amd64/LICENSE
Dec 13 01:27:21.837922 tar[1452]: linux-amd64/README.md
Dec 13 01:27:21.868752 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:27:22.223104 systemd-networkd[1406]: eth0: Gained IPv6LL
Dec 13 01:27:22.227448 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:27:22.230012 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:27:22.243333 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:27:22.247094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:22.250328 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:27:22.276866 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:27:22.277242 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:27:22.279474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:27:22.282680 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:27:23.022433 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:27:23.035387 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:34914.service - OpenSSH per-connection server daemon (10.0.0.1:34914).
Dec 13 01:27:23.100967 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 34914 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:23.104396 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:23.319419 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:27:23.337198 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:27:23.340641 systemd-logind[1443]: New session 1 of user core.
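containerd boots with level=error "failed to load cni during init ... no network config found in /etc/cni/net.d", which is normal on a node where no CNI plugin has been installed yet; the "cni network conf syncer" started just after will pick a config up once one appears. Purely as an illustration (plugin choice, name, and subnet are assumptions, not taken from this host), a minimal bridge conflist that would satisfy the syncer:

    # Hypothetical /etc/cni/net.d/10-bridge.conflist
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.85.0.0/16" } ]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF

In practice a Kubernetes network add-on (flannel, Calico, and so on) writes this file itself, which is why the error is simply left to resolve on its own here.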
Dec 13 01:27:23.356995 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:27:23.368322 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:27:23.375205 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:27:23.592425 systemd[1547]: Queued start job for default target default.target.
Dec 13 01:27:23.605571 systemd[1547]: Created slice app.slice - User Application Slice.
Dec 13 01:27:23.605605 systemd[1547]: Reached target paths.target - Paths.
Dec 13 01:27:23.605619 systemd[1547]: Reached target timers.target - Timers.
Dec 13 01:27:23.607575 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:27:23.622264 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:27:23.622438 systemd[1547]: Reached target sockets.target - Sockets.
Dec 13 01:27:23.622455 systemd[1547]: Reached target basic.target - Basic System.
Dec 13 01:27:23.622512 systemd[1547]: Reached target default.target - Main User Target.
Dec 13 01:27:23.622555 systemd[1547]: Startup finished in 233ms.
Dec 13 01:27:23.622890 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:27:23.625823 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:27:23.649032 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:34924.service - OpenSSH per-connection server daemon (10.0.0.1:34924).
Dec 13 01:27:23.740023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:23.742189 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:27:23.743610 systemd[1]: Startup finished in 1.325s (kernel) + 7.733s (initrd) + 6.276s (userspace) = 15.335s.
Dec 13 01:27:23.744392 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 34924 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:23.746483 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:27:23.746402 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:23.752785 systemd-logind[1443]: New session 2 of user core.
Dec 13 01:27:23.753633 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:27:23.812844 sshd[1558]: pam_unix(sshd:session): session closed for user core
Dec 13 01:27:23.829172 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:34924.service: Deactivated successfully.
Dec 13 01:27:23.831247 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:27:23.833329 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:27:23.839325 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:34938.service - OpenSSH per-connection server daemon (10.0.0.1:34938).
Dec 13 01:27:23.841270 systemd-logind[1443]: Removed session 2.
Dec 13 01:27:23.871438 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 34938 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:23.873653 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:23.878667 systemd-logind[1443]: New session 3 of user core.
Dec 13 01:27:23.888244 systemd[1]: Started session-3.scope - Session 3 of User core.
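The boot summary "Startup finished in 1.325s (kernel) + 7.733s (initrd) + 6.276s (userspace) = 15.335s" is computed from microsecond-precision values, which is why the rounded components appear to sum to 15.334s. The per-unit breakdown behind a number like this comes from systemd's own analysis tools:

    systemd-analyze blame           # time spent starting each unit
    systemd-analyze critical-chain  # the dependency chain that gated multi-user.target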
Dec 13 01:27:24.017559 sshd[1575]: pam_unix(sshd:session): session closed for user core
Dec 13 01:27:24.025836 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:34938.service: Deactivated successfully.
Dec 13 01:27:24.027926 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:27:24.029512 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:27:24.034187 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:34946.service - OpenSSH per-connection server daemon (10.0.0.1:34946).
Dec 13 01:27:24.035248 systemd-logind[1443]: Removed session 3.
Dec 13 01:27:24.067364 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 34946 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:24.069474 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:24.073528 systemd-logind[1443]: New session 4 of user core.
Dec 13 01:27:24.081051 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:27:24.141170 sshd[1587]: pam_unix(sshd:session): session closed for user core
Dec 13 01:27:24.163138 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:34946.service: Deactivated successfully.
Dec 13 01:27:24.165657 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:27:24.167972 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:27:24.173303 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:34948.service - OpenSSH per-connection server daemon (10.0.0.1:34948).
Dec 13 01:27:24.174516 systemd-logind[1443]: Removed session 4.
Dec 13 01:27:24.212826 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:24.215458 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:24.220916 systemd-logind[1443]: New session 5 of user core.
Dec 13 01:27:24.232081 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:27:24.430015 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:27:24.430420 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:27:24.450576 sudo[1599]: pam_unix(sudo:session): session closed for user root
Dec 13 01:27:24.453399 sshd[1594]: pam_unix(sshd:session): session closed for user core
Dec 13 01:27:24.468530 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:34948.service: Deactivated successfully.
Dec 13 01:27:24.472097 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:27:24.474241 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:27:24.480422 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:34958.service - OpenSSH per-connection server daemon (10.0.0.1:34958).
Dec 13 01:27:24.481339 systemd-logind[1443]: Removed session 5.
Dec 13 01:27:24.511501 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 34958 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:24.513107 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:24.517091 systemd-logind[1443]: New session 6 of user core.
Dec 13 01:27:24.532036 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:27:24.592794 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:27:24.593203 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:27:24.597778 sudo[1609]: pam_unix(sudo:session): session closed for user root
Dec 13 01:27:24.607149 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:27:24.607690 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:27:24.625092 kubelet[1565]: E1213 01:27:24.624958 1565 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:27:24.628247 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:27:24.630403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:27:24.630636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:27:24.630874 auditctl[1612]: No rules
Dec 13 01:27:24.631121 systemd[1]: kubelet.service: Consumed 2.201s CPU time.
Dec 13 01:27:24.631717 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:27:24.631966 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:27:24.634910 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:27:24.674784 augenrules[1631]: No rules
Dec 13 01:27:24.676064 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:27:24.677408 sudo[1608]: pam_unix(sudo:session): session closed for user root
Dec 13 01:27:24.679380 sshd[1605]: pam_unix(sshd:session): session closed for user core
Dec 13 01:27:24.695026 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:34958.service: Deactivated successfully.
Dec 13 01:27:24.696931 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:27:24.698593 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:27:24.707213 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:34968.service - OpenSSH per-connection server daemon (10.0.0.1:34968).
Dec 13 01:27:24.708365 systemd-logind[1443]: Removed session 6.
Dec 13 01:27:24.738042 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 34968 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:27:24.739807 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:27:24.745388 systemd-logind[1443]: New session 7 of user core.
Dec 13 01:27:24.759080 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:27:24.817595 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:27:24.818147 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:27:25.409152 systemd[1]: Starting docker.service - Docker Application Container Engine...
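The sudo session above removes the default audit rule files and restarts audit-rules, after which auditctl and augenrules both report "No rules". For completeness, a sketch of how a replacement rule would be installed under the same layout (the watch rule itself is an illustrative assumption, not from this host):

    # Drop a rule into the directory augenrules compiles from
    echo '-w /etc/kubernetes/ -p wa -k kube-config' >/etc/audit/rules.d/10-kube.rules
    augenrules --load   # rebuild and load; 'auditctl -l' should now list it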
Dec 13 01:27:25.409327 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:27:26.002092 dockerd[1661]: time="2024-12-13T01:27:26.001980653Z" level=info msg="Starting up"
Dec 13 01:27:26.778162 dockerd[1661]: time="2024-12-13T01:27:26.778073302Z" level=info msg="Loading containers: start."
Dec 13 01:27:26.921942 kernel: Initializing XFRM netlink socket
Dec 13 01:27:27.010052 systemd-networkd[1406]: docker0: Link UP
Dec 13 01:27:27.036858 dockerd[1661]: time="2024-12-13T01:27:27.036704164Z" level=info msg="Loading containers: done."
Dec 13 01:27:27.056979 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck380634222-merged.mount: Deactivated successfully.
Dec 13 01:27:27.059394 dockerd[1661]: time="2024-12-13T01:27:27.059332893Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:27:27.059510 dockerd[1661]: time="2024-12-13T01:27:27.059485376Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:27:27.059680 dockerd[1661]: time="2024-12-13T01:27:27.059654104Z" level=info msg="Daemon has completed initialization"
Dec 13 01:27:27.099469 dockerd[1661]: time="2024-12-13T01:27:27.099365748Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:27:27.099673 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:27:28.081878 containerd[1462]: time="2024-12-13T01:27:28.081671435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:27:28.850424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3943842793.mount: Deactivated successfully.
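"API listen on /run/docker.sock" means the engine is now serving on the socket-activated unix socket from earlier in the log. A quick hedged check that the API is answering:

    curl --unix-socket /run/docker.sock http://localhost/_ping   # expect: OK
    docker version                                               # same socket via the CLI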
Dec 13 01:27:30.698237 containerd[1462]: time="2024-12-13T01:27:30.698147740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:30.698946 containerd[1462]: time="2024-12-13T01:27:30.698830459Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:27:30.700243 containerd[1462]: time="2024-12-13T01:27:30.700195404Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:30.704991 containerd[1462]: time="2024-12-13T01:27:30.704926221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:30.706715 containerd[1462]: time="2024-12-13T01:27:30.706653769Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.624906333s"
Dec 13 01:27:30.706799 containerd[1462]: time="2024-12-13T01:27:30.706717456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:27:30.770844 containerd[1462]: time="2024-12-13T01:27:30.770780543Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:27:33.241708 containerd[1462]: time="2024-12-13T01:27:33.241609290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:33.242641 containerd[1462]: time="2024-12-13T01:27:33.242559953Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:27:33.244456 containerd[1462]: time="2024-12-13T01:27:33.244395687Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:33.248030 containerd[1462]: time="2024-12-13T01:27:33.247984698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:33.249664 containerd[1462]: time="2024-12-13T01:27:33.249618929Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.478778596s"
Dec 13 01:27:33.249664 containerd[1462]: time="2024-12-13T01:27:33.249660683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:27:33.333505 containerd[1462]: time="2024-12-13T01:27:33.333446710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:27:34.842164 containerd[1462]: time="2024-12-13T01:27:34.842056729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:34.842964 containerd[1462]: time="2024-12-13T01:27:34.842868998Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:27:34.844280 containerd[1462]: time="2024-12-13T01:27:34.844241708Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:34.848316 containerd[1462]: time="2024-12-13T01:27:34.848227556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:34.849696 containerd[1462]: time="2024-12-13T01:27:34.849639564Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.516146985s"
Dec 13 01:27:34.849696 containerd[1462]: time="2024-12-13T01:27:34.849690449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:27:34.880394 containerd[1462]: time="2024-12-13T01:27:34.880343818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:27:34.881070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:27:34.898336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:35.142414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:35.148183 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:27:35.416352 kubelet[1904]: E1213 01:27:35.416138 1904 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:27:35.424805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:27:35.425068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:27:36.460133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616815887.mount: Deactivated successfully.
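The kubelet exits again with "open /var/lib/kubelet/config.yaml: no such file or directory", and systemd schedules another restart. On a kubeadm-provisioned node this loop is expected until kubeadm init or kubeadm join writes that file; the unit simply keeps retrying. For illustration only, a hand-written file of the expected kind would look like this (fields are assumptions, not recovered from this host):

    # Hypothetical minimal /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches SystemdCgroup:true in the CRI config above
    staticPodPath: /etc/kubernetes/manifests   # the path the kubelet later logs as "Adding static pod path"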
Dec 13 01:27:37.397163 containerd[1462]: time="2024-12-13T01:27:37.397081279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:37.398029 containerd[1462]: time="2024-12-13T01:27:37.397978610Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:27:37.399136 containerd[1462]: time="2024-12-13T01:27:37.399103262Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:37.401372 containerd[1462]: time="2024-12-13T01:27:37.401320097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:37.402143 containerd[1462]: time="2024-12-13T01:27:37.402097681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.521699119s"
Dec 13 01:27:37.402177 containerd[1462]: time="2024-12-13T01:27:37.402140627Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:27:37.432100 containerd[1462]: time="2024-12-13T01:27:37.432056190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:27:38.045567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512370339.mount: Deactivated successfully.
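Each PullImage/ImageCreate pair in this stretch is a CRI pull landing in containerd's k8s.io namespace. A hedged shell equivalent, assuming the ctr and crictl tools are present on the node:

    ctr --namespace k8s.io images pull registry.k8s.io/kube-proxy:v1.29.12
    crictl images   # lists what the CRI plugin sees, matching the image IDs in this log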
Dec 13 01:27:40.009813 containerd[1462]: time="2024-12-13T01:27:40.009712299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:40.049722 containerd[1462]: time="2024-12-13T01:27:40.049597168Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:27:40.081516 containerd[1462]: time="2024-12-13T01:27:40.081415095Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:40.143449 containerd[1462]: time="2024-12-13T01:27:40.143358745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:40.157292 containerd[1462]: time="2024-12-13T01:27:40.144606175Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.712510364s"
Dec 13 01:27:40.157292 containerd[1462]: time="2024-12-13T01:27:40.144674052Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:27:40.182442 containerd[1462]: time="2024-12-13T01:27:40.182365728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:27:41.179118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912612887.mount: Deactivated successfully.
Dec 13 01:27:41.185708 containerd[1462]: time="2024-12-13T01:27:41.185634188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:41.186441 containerd[1462]: time="2024-12-13T01:27:41.186386595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:27:41.187740 containerd[1462]: time="2024-12-13T01:27:41.187698692Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:41.190980 containerd[1462]: time="2024-12-13T01:27:41.190188939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:41.192549 containerd[1462]: time="2024-12-13T01:27:41.192511070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.010095747s"
Dec 13 01:27:41.192777 containerd[1462]: time="2024-12-13T01:27:41.192582713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:27:41.222469 containerd[1462]: time="2024-12-13T01:27:41.222413164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:27:42.176431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815084025.mount: Deactivated successfully.
Dec 13 01:27:44.932080 containerd[1462]: time="2024-12-13T01:27:44.931989847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:44.964168 containerd[1462]: time="2024-12-13T01:27:44.964098705Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:27:45.023793 containerd[1462]: time="2024-12-13T01:27:45.023717228Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:45.057496 containerd[1462]: time="2024-12-13T01:27:45.057433476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:27:45.058630 containerd[1462]: time="2024-12-13T01:27:45.058588151Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.836134949s"
Dec 13 01:27:45.058630 containerd[1462]: time="2024-12-13T01:27:45.058617807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:27:45.441108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:27:45.450147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:45.598188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:45.604319 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:27:45.663496 kubelet[2073]: E1213 01:27:45.663439 2073 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:27:45.669102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:27:45.669317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:27:48.341246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:48.351176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:48.372365 systemd[1]: Reloading requested from client PID 2132 ('systemctl') (unit session-7.scope)...
Dec 13 01:27:48.372383 systemd[1]: Reloading...
Dec 13 01:27:48.462983 zram_generator::config[2171]: No configuration found.
Dec 13 01:27:48.856161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:27:48.934631 systemd[1]: Reloading finished in 561 ms.
Dec 13 01:27:48.982610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:27:48.982728 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:27:48.983052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:48.985919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:27:49.138938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:27:49.144470 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:27:49.204176 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:27:49.204176 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:27:49.204176 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
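During the reload, systemd flags docker.socket for pointing its ListenStream= below the legacy /var/run directory and rewrites it to /run/docker.sock in memory. The permanent fix it asks for is a drop-in override, sketched here using standard systemd drop-in semantics (the empty assignment clears the inherited value):

    # systemctl edit docker.socket, then in the override:
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    # then: systemctl daemon-reload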
Dec 13 01:27:49.204719 kubelet[2220]: I1213 01:27:49.204213 2220 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:49.614385 kubelet[2220]: I1213 01:27:49.614222 2220 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:49.614385 kubelet[2220]: I1213 01:27:49.614267 2220 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:49.614574 kubelet[2220]: I1213 01:27:49.614551 2220 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:49.635807 kubelet[2220]: E1213 01:27:49.635749 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.636246 kubelet[2220]: I1213 01:27:49.636212 2220 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:49.651237 kubelet[2220]: I1213 01:27:49.651156 2220 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:49.652702 kubelet[2220]: I1213 01:27:49.652650 2220 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:49.654212 kubelet[2220]: I1213 01:27:49.653242 2220 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:49.654212 kubelet[2220]: I1213 01:27:49.653332 2220 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:49.654212 kubelet[2220]: I1213 01:27:49.653354 2220 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:49.654212 kubelet[2220]: I1213 01:27:49.653579 2220 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:49.654212 kubelet[2220]: I1213 01:27:49.653791 2220 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:49.654212 kubelet[2220]: 
I1213 01:27:49.653819 2220 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:49.654212 kubelet[2220]: I1213 01:27:49.653882 2220 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:49.654636 kubelet[2220]: I1213 01:27:49.653933 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:49.658243 kubelet[2220]: W1213 01:27:49.658165 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.658243 kubelet[2220]: E1213 01:27:49.658238 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.658400 kubelet[2220]: I1213 01:27:49.658352 2220 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:49.660362 kubelet[2220]: W1213 01:27:49.660318 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.660362 kubelet[2220]: E1213 01:27:49.660360 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.664356 kubelet[2220]: I1213 01:27:49.664303 2220 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:49.664527 kubelet[2220]: W1213 01:27:49.664450 2220 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
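The "Adding static pod path" path="/etc/kubernetes/manifests" line above is what lets this node bootstrap while the API server at 10.0.0.47:6443 is still refusing connections: the kubelet runs any pod manifest dropped into that directory directly, without consulting the API server, and later reports each one back as a mirror pod (hence the "Failed creating a mirror pod ... already exists" errors further down). A simplified, hypothetical shape of such a manifest — only the pod name, namespace, and volume name appear in this log; the image tag and mount paths are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-apiserver
    namespace: kube-system
  spec:
    hostNetwork: true
    containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.29.2   # assumed to match kubeletVersion above
      volumeMounts:
      - name: ca-certs
        mountPath: /etc/ssl/certs
        readOnly: true
    volumes:
    - name: ca-certs
      hostPath:
        path: /etc/ssl/certs
        type: DirectoryOrCreate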
Dec 13 01:27:49.666435 kubelet[2220]: I1213 01:27:49.666391 2220 server.go:1256] "Started kubelet" Dec 13 01:27:49.666725 kubelet[2220]: I1213 01:27:49.666693 2220 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:49.667342 kubelet[2220]: I1213 01:27:49.666768 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:49.667342 kubelet[2220]: I1213 01:27:49.667310 2220 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:49.670389 kubelet[2220]: I1213 01:27:49.668656 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:49.670389 kubelet[2220]: I1213 01:27:49.668733 2220 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:49.670389 kubelet[2220]: I1213 01:27:49.670039 2220 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:49.670389 kubelet[2220]: I1213 01:27:49.670163 2220 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:49.671749 kubelet[2220]: E1213 01:27:49.671719 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms" Dec 13 01:27:49.672811 kubelet[2220]: I1213 01:27:49.672644 2220 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:49.672811 kubelet[2220]: E1213 01:27:49.672765 2220 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:49.673456 kubelet[2220]: W1213 01:27:49.672985 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.673456 kubelet[2220]: E1213 01:27:49.673036 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.673456 kubelet[2220]: I1213 01:27:49.673456 2220 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:49.673573 kubelet[2220]: I1213 01:27:49.673533 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:49.674353 kubelet[2220]: I1213 01:27:49.674331 2220 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:49.674803 kubelet[2220]: E1213 01:27:49.674764 2220 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109842e81144b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 
01:27:49.666350263 +0000 UTC m=+0.517347728,LastTimestamp:2024-12-13 01:27:49.666350263 +0000 UTC m=+0.517347728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:27:49.688082 kubelet[2220]: I1213 01:27:49.688040 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:49.688082 kubelet[2220]: I1213 01:27:49.688066 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:49.688082 kubelet[2220]: I1213 01:27:49.688090 2220 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:49.691840 kubelet[2220]: I1213 01:27:49.691773 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:49.694291 kubelet[2220]: I1213 01:27:49.694249 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:49.694291 kubelet[2220]: I1213 01:27:49.694299 2220 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:49.694475 kubelet[2220]: I1213 01:27:49.694323 2220 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:49.694475 kubelet[2220]: E1213 01:27:49.694376 2220 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:49.695917 kubelet[2220]: W1213 01:27:49.694942 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.695917 kubelet[2220]: E1213 01:27:49.694998 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:49.773355 kubelet[2220]: I1213 01:27:49.773322 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:49.773848 kubelet[2220]: E1213 01:27:49.773814 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Dec 13 01:27:49.795268 kubelet[2220]: E1213 01:27:49.795149 2220 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:49.873522 kubelet[2220]: E1213 01:27:49.873315 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" Dec 13 01:27:49.975963 kubelet[2220]: I1213 01:27:49.975880 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:49.976517 kubelet[2220]: E1213 01:27:49.976470 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Dec 13 01:27:49.995793 kubelet[2220]: E1213 01:27:49.995644 2220 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:50.121815 kubelet[2220]: I1213 
01:27:50.121739 2220 policy_none.go:49] "None policy: Start" Dec 13 01:27:50.123308 kubelet[2220]: I1213 01:27:50.123240 2220 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:50.123308 kubelet[2220]: I1213 01:27:50.123293 2220 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:50.274608 kubelet[2220]: E1213 01:27:50.274457 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" Dec 13 01:27:50.379384 kubelet[2220]: I1213 01:27:50.379318 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:50.380002 kubelet[2220]: E1213 01:27:50.379963 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Dec 13 01:27:50.396054 kubelet[2220]: E1213 01:27:50.395989 2220 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:50.644711 kubelet[2220]: W1213 01:27:50.644560 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.644711 kubelet[2220]: E1213 01:27:50.644617 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.748036 kubelet[2220]: W1213 01:27:50.747957 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.748036 kubelet[2220]: E1213 01:27:50.748016 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.759393 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:27:50.778271 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:27:50.781868 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
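The "Failed to ensure lease exists, will retry" errors above back off geometrically (200ms, 400ms, 800ms, and further down 1.6s and 3.2s) while the API server stays unreachable. The object being retried is the node heartbeat Lease in the kube-node-lease namespace, whose name and namespace are visible in the request URL; roughly the following, with the duration and timestamp below being illustrative assumptions rather than values from this log:

  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: localhost                # lease is named after the node
    namespace: kube-node-lease
  spec:
    holderIdentity: localhost
    leaseDurationSeconds: 40       # default heartbeat window, assumed
    renewTime: "2024-12-13T01:27:49.666350Z"   # illustrative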
Dec 13 01:27:50.798426 kubelet[2220]: I1213 01:27:50.798255 2220 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:50.798783 kubelet[2220]: I1213 01:27:50.798627 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:50.800004 kubelet[2220]: E1213 01:27:50.799978 2220 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:27:50.928747 kubelet[2220]: W1213 01:27:50.928564 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.928747 kubelet[2220]: E1213 01:27:50.928630 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.980119 kubelet[2220]: W1213 01:27:50.980039 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:50.980119 kubelet[2220]: E1213 01:27:50.980095 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:51.075341 kubelet[2220]: E1213 01:27:51.075243 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:51.182526 kubelet[2220]: I1213 01:27:51.182373 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:51.182795 kubelet[2220]: E1213 01:27:51.182752 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Dec 13 01:27:51.197228 kubelet[2220]: I1213 01:27:51.197132 2220 topology_manager.go:215] "Topology Admit Handler" podUID="6ec11f220e42ccb6ae3ace57e79ac18d" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:27:51.280556 kubelet[2220]: I1213 01:27:51.280470 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ec11f220e42ccb6ae3ace57e79ac18d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ec11f220e42ccb6ae3ace57e79ac18d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:51.280556 kubelet[2220]: I1213 01:27:51.280527 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ec11f220e42ccb6ae3ace57e79ac18d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ec11f220e42ccb6ae3ace57e79ac18d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:51.280556 kubelet[2220]: I1213 01:27:51.280553 2220 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ec11f220e42ccb6ae3ace57e79ac18d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ec11f220e42ccb6ae3ace57e79ac18d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:27:51.309743 kubelet[2220]: I1213 01:27:51.309683 2220 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:27:51.311764 kubelet[2220]: I1213 01:27:51.311678 2220 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:27:51.322667 systemd[1]: Created slice kubepods-burstable-pod6ec11f220e42ccb6ae3ace57e79ac18d.slice - libcontainer container kubepods-burstable-pod6ec11f220e42ccb6ae3ace57e79ac18d.slice. Dec 13 01:27:51.335075 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 01:27:51.354872 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Dec 13 01:27:51.381366 kubelet[2220]: I1213 01:27:51.381275 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:51.381366 kubelet[2220]: I1213 01:27:51.381353 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:51.381366 kubelet[2220]: I1213 01:27:51.381383 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:51.381649 kubelet[2220]: I1213 01:27:51.381416 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:51.381649 kubelet[2220]: I1213 01:27:51.381497 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:27:51.381649 kubelet[2220]: I1213 01:27:51.381610 2220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:27:51.632428 kubelet[2220]: E1213 01:27:51.632251 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:51.633285 containerd[1462]: time="2024-12-13T01:27:51.633227095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ec11f220e42ccb6ae3ace57e79ac18d,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:51.652735 kubelet[2220]: E1213 01:27:51.652692 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:51.653458 containerd[1462]: time="2024-12-13T01:27:51.653417934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:51.659092 kubelet[2220]: E1213 01:27:51.659038 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:51.661975 containerd[1462]: time="2024-12-13T01:27:51.661919358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:51.790925 kubelet[2220]: E1213 01:27:51.790857 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:52.468138 kubelet[2220]: W1213 01:27:52.468035 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:52.468138 kubelet[2220]: E1213 01:27:52.468125 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:52.669882 kubelet[2220]: W1213 01:27:52.669775 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:52.669882 kubelet[2220]: E1213 01:27:52.669867 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:52.676770 kubelet[2220]: E1213 01:27:52.676696 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="3.2s" Dec 13 01:27:52.775430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758843591.mount: Deactivated successfully. Dec 13 01:27:52.785126 kubelet[2220]: I1213 01:27:52.785068 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:52.785465 kubelet[2220]: E1213 01:27:52.785432 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Dec 13 01:27:53.000208 containerd[1462]: time="2024-12-13T01:27:53.000109718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:53.002412 containerd[1462]: time="2024-12-13T01:27:53.002335739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:53.003401 containerd[1462]: time="2024-12-13T01:27:53.003330616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:27:53.005321 containerd[1462]: time="2024-12-13T01:27:53.005267689Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:53.006335 containerd[1462]: time="2024-12-13T01:27:53.006252280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:53.007533 containerd[1462]: time="2024-12-13T01:27:53.007488003Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:53.008882 containerd[1462]: time="2024-12-13T01:27:53.008803414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:53.011668 containerd[1462]: time="2024-12-13T01:27:53.011545919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:53.014540 containerd[1462]: time="2024-12-13T01:27:53.014473298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.360964419s" Dec 13 01:27:53.015441 containerd[1462]: time="2024-12-13T01:27:53.015383893Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.382031243s" Dec 13 01:27:53.018984 containerd[1462]: 
time="2024-12-13T01:27:53.018916411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.356881125s" Dec 13 01:27:53.249987 kubelet[2220]: W1213 01:27:53.249760 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:53.249987 kubelet[2220]: E1213 01:27:53.249858 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:53.299456 kubelet[2220]: W1213 01:27:53.299342 2220 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:53.299456 kubelet[2220]: E1213 01:27:53.299429 2220 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.446125228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.446180016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.446198953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.446432409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.445607358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.447304859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.447385342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:53.447920 containerd[1462]: time="2024-12-13T01:27:53.447773817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:53.450797 containerd[1462]: time="2024-12-13T01:27:53.450539399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:53.450797 containerd[1462]: time="2024-12-13T01:27:53.450596652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:53.450797 containerd[1462]: time="2024-12-13T01:27:53.450607459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:53.450797 containerd[1462]: time="2024-12-13T01:27:53.450714398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:53.493752 systemd[1]: Started cri-containerd-34706b6bad96f7e900b2e6b5746e215eca7c48bb5c06c1d364550d4e02f0b937.scope - libcontainer container 34706b6bad96f7e900b2e6b5746e215eca7c48bb5c06c1d364550d4e02f0b937. Dec 13 01:27:53.500965 systemd[1]: Started cri-containerd-be8a468644d6a689f7e40b054d256e295cd3651d5d14a64c9417bfbb1b90a3e3.scope - libcontainer container be8a468644d6a689f7e40b054d256e295cd3651d5d14a64c9417bfbb1b90a3e3. Dec 13 01:27:53.504353 systemd[1]: Started cri-containerd-c57dd5051d903b932a1c8a332964bd15f8bf1ad434a816b78a22b341bd1bc145.scope - libcontainer container c57dd5051d903b932a1c8a332964bd15f8bf1ad434a816b78a22b341bd1bc145. Dec 13 01:27:53.635734 kubelet[2220]: E1213 01:27:53.635638 2220 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109842e81144b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:27:49.666350263 +0000 UTC m=+0.517347728,LastTimestamp:2024-12-13 01:27:49.666350263 +0000 UTC m=+0.517347728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:27:53.687996 containerd[1462]: time="2024-12-13T01:27:53.687281143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ec11f220e42ccb6ae3ace57e79ac18d,Namespace:kube-system,Attempt:0,} returns sandbox id \"be8a468644d6a689f7e40b054d256e295cd3651d5d14a64c9417bfbb1b90a3e3\"" Dec 13 01:27:53.687996 containerd[1462]: time="2024-12-13T01:27:53.687609979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"34706b6bad96f7e900b2e6b5746e215eca7c48bb5c06c1d364550d4e02f0b937\"" Dec 13 01:27:53.690839 kubelet[2220]: E1213 01:27:53.690790 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:53.691994 kubelet[2220]: E1213 01:27:53.691951 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:53.692089 containerd[1462]: time="2024-12-13T01:27:53.692015157Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c57dd5051d903b932a1c8a332964bd15f8bf1ad434a816b78a22b341bd1bc145\"" Dec 13 01:27:53.692596 kubelet[2220]: E1213 01:27:53.692567 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:53.694787 containerd[1462]: time="2024-12-13T01:27:53.694748277Z" level=info msg="CreateContainer within sandbox \"34706b6bad96f7e900b2e6b5746e215eca7c48bb5c06c1d364550d4e02f0b937\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:53.697231 containerd[1462]: time="2024-12-13T01:27:53.697193014Z" level=info msg="CreateContainer within sandbox \"c57dd5051d903b932a1c8a332964bd15f8bf1ad434a816b78a22b341bd1bc145\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:53.697384 containerd[1462]: time="2024-12-13T01:27:53.697200103Z" level=info msg="CreateContainer within sandbox \"be8a468644d6a689f7e40b054d256e295cd3651d5d14a64c9417bfbb1b90a3e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:53.732098 containerd[1462]: time="2024-12-13T01:27:53.732017864Z" level=info msg="CreateContainer within sandbox \"c57dd5051d903b932a1c8a332964bd15f8bf1ad434a816b78a22b341bd1bc145\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2b0d0d41d466fc9777a8e3753bc3f9509dca76771073d95c2691bab2948d714d\"" Dec 13 01:27:53.733166 containerd[1462]: time="2024-12-13T01:27:53.733052402Z" level=info msg="StartContainer for \"2b0d0d41d466fc9777a8e3753bc3f9509dca76771073d95c2691bab2948d714d\"" Dec 13 01:27:53.735125 containerd[1462]: time="2024-12-13T01:27:53.734795637Z" level=info msg="CreateContainer within sandbox \"34706b6bad96f7e900b2e6b5746e215eca7c48bb5c06c1d364550d4e02f0b937\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d5086748aee61bc097f2ec5057e718f47c6c59911ffa83551340381cd4b498a\"" Dec 13 01:27:53.736681 containerd[1462]: time="2024-12-13T01:27:53.735881683Z" level=info msg="StartContainer for \"7d5086748aee61bc097f2ec5057e718f47c6c59911ffa83551340381cd4b498a\"" Dec 13 01:27:53.737820 containerd[1462]: time="2024-12-13T01:27:53.737210218Z" level=info msg="CreateContainer within sandbox \"be8a468644d6a689f7e40b054d256e295cd3651d5d14a64c9417bfbb1b90a3e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"455d111b7f6d45dc4fbc33b368ded973c6f936c66a454da6a3a5e1e098994950\"" Dec 13 01:27:53.738188 containerd[1462]: time="2024-12-13T01:27:53.738161185Z" level=info msg="StartContainer for \"455d111b7f6d45dc4fbc33b368ded973c6f936c66a454da6a3a5e1e098994950\"" Dec 13 01:27:53.815334 systemd[1]: Started cri-containerd-2b0d0d41d466fc9777a8e3753bc3f9509dca76771073d95c2691bab2948d714d.scope - libcontainer container 2b0d0d41d466fc9777a8e3753bc3f9509dca76771073d95c2691bab2948d714d. Dec 13 01:27:53.825177 systemd[1]: Started cri-containerd-7d5086748aee61bc097f2ec5057e718f47c6c59911ffa83551340381cd4b498a.scope - libcontainer container 7d5086748aee61bc097f2ec5057e718f47c6c59911ffa83551340381cd4b498a. Dec 13 01:27:53.838219 systemd[1]: Started cri-containerd-455d111b7f6d45dc4fbc33b368ded973c6f936c66a454da6a3a5e1e098994950.scope - libcontainer container 455d111b7f6d45dc4fbc33b368ded973c6f936c66a454da6a3a5e1e098994950. 
Dec 13 01:27:53.946620 containerd[1462]: time="2024-12-13T01:27:53.946539009Z" level=info msg="StartContainer for \"455d111b7f6d45dc4fbc33b368ded973c6f936c66a454da6a3a5e1e098994950\" returns successfully" Dec 13 01:27:53.946798 containerd[1462]: time="2024-12-13T01:27:53.946563831Z" level=info msg="StartContainer for \"2b0d0d41d466fc9777a8e3753bc3f9509dca76771073d95c2691bab2948d714d\" returns successfully" Dec 13 01:27:53.946798 containerd[1462]: time="2024-12-13T01:27:53.946576955Z" level=info msg="StartContainer for \"7d5086748aee61bc097f2ec5057e718f47c6c59911ffa83551340381cd4b498a\" returns successfully" Dec 13 01:27:54.710871 kubelet[2220]: E1213 01:27:54.710834 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:54.716006 kubelet[2220]: E1213 01:27:54.715849 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:54.716855 kubelet[2220]: E1213 01:27:54.716694 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.721322 kubelet[2220]: E1213 01:27:55.720676 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.722566 kubelet[2220]: E1213 01:27:55.721205 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.725932 kubelet[2220]: E1213 01:27:55.725263 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:55.880400 kubelet[2220]: E1213 01:27:55.880340 2220 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:27:55.974573 kubelet[2220]: E1213 01:27:55.974465 2220 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:27:55.986717 kubelet[2220]: I1213 01:27:55.986692 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:27:56.029226 kubelet[2220]: I1213 01:27:56.029183 2220 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:27:56.660550 kubelet[2220]: I1213 01:27:56.660509 2220 apiserver.go:52] "Watching apiserver" Dec 13 01:27:56.672714 kubelet[2220]: I1213 01:27:56.672695 2220 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:56.729489 kubelet[2220]: E1213 01:27:56.728851 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:56.729489 kubelet[2220]: E1213 01:27:56.729248 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:57.515666 kubelet[2220]: E1213 01:27:57.515627 2220 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:57.720982 kubelet[2220]: E1213 01:27:57.720713 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:57.720982 kubelet[2220]: E1213 01:27:57.720886 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:57.721180 kubelet[2220]: E1213 01:27:57.721006 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:27:59.268630 systemd[1]: Reloading requested from client PID 2498 ('systemctl') (unit session-7.scope)... Dec 13 01:27:59.268646 systemd[1]: Reloading... Dec 13 01:27:59.362936 zram_generator::config[2538]: No configuration found. Dec 13 01:27:59.498111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:59.608015 systemd[1]: Reloading finished in 338 ms. Dec 13 01:27:59.659648 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:59.677311 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:59.677566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:59.677611 systemd[1]: kubelet.service: Consumed 1.151s CPU time, 115.1M memory peak, 0B memory swap peak. Dec 13 01:27:59.688292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:59.859143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:59.865484 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:59.926760 kubelet[2582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:59.926760 kubelet[2582]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:59.926760 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:27:59.927354 kubelet[2582]: I1213 01:27:59.926817 2582 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:59.933263 kubelet[2582]: I1213 01:27:59.933231 2582 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:59.933263 kubelet[2582]: I1213 01:27:59.933258 2582 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:59.933572 kubelet[2582]: I1213 01:27:59.933552 2582 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:59.935631 kubelet[2582]: I1213 01:27:59.935583 2582 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:27:59.952285 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:27:59.952749 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:27:59.956084 kubelet[2582]: I1213 01:27:59.956034 2582 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:59.985947 kubelet[2582]: I1213 01:27:59.985339 2582 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:59.985947 kubelet[2582]: I1213 01:27:59.985659 2582 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:59.985947 kubelet[2582]: I1213 01:27:59.985826 2582 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:59.985947 kubelet[2582]: I1213 01:27:59.985853 2582 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:59.985947 kubelet[2582]: I1213 01:27:59.985862 2582 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:59.986191 kubelet[2582]: I1213 01:27:59.986176 2582 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:59.986345 kubelet[2582]: I1213 01:27:59.986332 2582 kubelet.go:396] "Attempting to sync node with API server" Dec 13 
01:27:59.986420 kubelet[2582]: I1213 01:27:59.986408 2582 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:59.986495 kubelet[2582]: I1213 01:27:59.986484 2582 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:59.986626 kubelet[2582]: I1213 01:27:59.986614 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:59.988326 kubelet[2582]: I1213 01:27:59.988301 2582 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:59.988657 kubelet[2582]: I1213 01:27:59.988633 2582 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:59.989762 kubelet[2582]: I1213 01:27:59.989746 2582 server.go:1256] "Started kubelet" Dec 13 01:27:59.994693 kubelet[2582]: I1213 01:27:59.994677 2582 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:59.996594 kubelet[2582]: I1213 01:27:59.996551 2582 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:28:00.003953 kubelet[2582]: I1213 01:27:59.993835 2582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:28:00.006099 kubelet[2582]: I1213 01:28:00.004973 2582 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:28:00.006099 kubelet[2582]: I1213 01:28:00.005310 2582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:28:00.006099 kubelet[2582]: I1213 01:28:00.005836 2582 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:28:00.006195 kubelet[2582]: I1213 01:28:00.006052 2582 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:28:00.006392 kubelet[2582]: I1213 01:28:00.006361 2582 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:28:00.009388 kubelet[2582]: E1213 01:28:00.009352 2582 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:28:00.009751 kubelet[2582]: I1213 01:28:00.009724 2582 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:28:00.009883 kubelet[2582]: I1213 01:28:00.009853 2582 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:28:00.013722 kubelet[2582]: I1213 01:28:00.013698 2582 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:28:00.022383 kubelet[2582]: I1213 01:28:00.022354 2582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:28:00.024610 kubelet[2582]: I1213 01:28:00.024182 2582 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:28:00.024610 kubelet[2582]: I1213 01:28:00.024221 2582 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:28:00.024610 kubelet[2582]: I1213 01:28:00.024249 2582 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:28:00.024610 kubelet[2582]: E1213 01:28:00.024324 2582 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:28:00.062959 kubelet[2582]: I1213 01:28:00.062924 2582 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:28:00.063162 kubelet[2582]: I1213 01:28:00.063152 2582 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:28:00.063310 kubelet[2582]: I1213 01:28:00.063298 2582 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:28:00.063557 kubelet[2582]: I1213 01:28:00.063544 2582 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:28:00.063636 kubelet[2582]: I1213 01:28:00.063626 2582 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:28:00.063690 kubelet[2582]: I1213 01:28:00.063679 2582 policy_none.go:49] "None policy: Start" Dec 13 01:28:00.064389 kubelet[2582]: I1213 01:28:00.064371 2582 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:28:00.064595 kubelet[2582]: I1213 01:28:00.064544 2582 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:28:00.065542 kubelet[2582]: I1213 01:28:00.064822 2582 state_mem.go:75] "Updated machine memory state" Dec 13 01:28:00.070384 kubelet[2582]: I1213 01:28:00.070343 2582 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:28:00.071199 kubelet[2582]: I1213 01:28:00.071172 2582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:28:00.125577 kubelet[2582]: I1213 01:28:00.125418 2582 topology_manager.go:215] "Topology Admit Handler" podUID="6ec11f220e42ccb6ae3ace57e79ac18d" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:28:00.125577 kubelet[2582]: I1213 01:28:00.125551 2582 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:28:00.125577 kubelet[2582]: I1213 01:28:00.125588 2582 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:28:00.180329 kubelet[2582]: I1213 01:28:00.180041 2582 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:28:00.240076 kubelet[2582]: E1213 01:28:00.240026 2582 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:28:00.241328 kubelet[2582]: E1213 01:28:00.241286 2582 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:28:00.241503 kubelet[2582]: E1213 01:28:00.241357 2582 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:28:00.245952 kubelet[2582]: I1213 01:28:00.245880 2582 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:28:00.246043 kubelet[2582]: I1213 01:28:00.246010 
2582 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:28:00.307733 kubelet[2582]: I1213 01:28:00.307668 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:28:00.307733 kubelet[2582]: I1213 01:28:00.307726 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:28:00.307733 kubelet[2582]: I1213 01:28:00.307751 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ec11f220e42ccb6ae3ace57e79ac18d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ec11f220e42ccb6ae3ace57e79ac18d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:28:00.308060 kubelet[2582]: I1213 01:28:00.307771 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ec11f220e42ccb6ae3ace57e79ac18d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ec11f220e42ccb6ae3ace57e79ac18d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:28:00.308060 kubelet[2582]: I1213 01:28:00.307792 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:28:00.308060 kubelet[2582]: I1213 01:28:00.307812 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:28:00.308060 kubelet[2582]: I1213 01:28:00.307830 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:28:00.308060 kubelet[2582]: I1213 01:28:00.307848 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ec11f220e42ccb6ae3ace57e79ac18d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ec11f220e42ccb6ae3ace57e79ac18d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:28:00.308213 kubelet[2582]: I1213 01:28:00.307865 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:28:00.473165 sudo[2597]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:00.541844 kubelet[2582]: E1213 01:28:00.541801 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:00.542451 kubelet[2582]: E1213 01:28:00.542415 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:00.542524 kubelet[2582]: E1213 01:28:00.542484 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:00.987561 kubelet[2582]: I1213 01:28:00.987498 2582 apiserver.go:52] "Watching apiserver" Dec 13 01:28:01.006290 kubelet[2582]: I1213 01:28:01.006238 2582 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:28:01.040689 kubelet[2582]: E1213 01:28:01.040573 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:01.040689 kubelet[2582]: E1213 01:28:01.040618 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:01.040689 kubelet[2582]: E1213 01:28:01.040556 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:01.142039 kubelet[2582]: I1213 01:28:01.141983 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.141909953 podStartE2EDuration="4.141909953s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:01.130087482 +0000 UTC m=+1.259383478" watchObservedRunningTime="2024-12-13 01:28:01.141909953 +0000 UTC m=+1.271205959" Dec 13 01:28:01.142268 kubelet[2582]: I1213 01:28:01.142114 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.142093397 podStartE2EDuration="5.142093397s" podCreationTimestamp="2024-12-13 01:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:01.142038303 +0000 UTC m=+1.271334319" watchObservedRunningTime="2024-12-13 01:28:01.142093397 +0000 UTC m=+1.271389403" Dec 13 01:28:01.151711 kubelet[2582]: I1213 01:28:01.151424 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.151377877 podStartE2EDuration="5.151377877s" podCreationTimestamp="2024-12-13 01:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:01.151271528 +0000 UTC m=+1.280567524" watchObservedRunningTime="2024-12-13 
01:28:01.151377877 +0000 UTC m=+1.280673873" Dec 13 01:28:02.042034 kubelet[2582]: E1213 01:28:02.042001 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:02.269585 sudo[1642]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:02.272119 sshd[1639]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:02.276207 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:34968.service: Deactivated successfully. Dec 13 01:28:02.278167 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:28:02.278353 systemd[1]: session-7.scope: Consumed 6.246s CPU time, 191.9M memory peak, 0B memory swap peak. Dec 13 01:28:02.278834 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:28:02.279942 systemd-logind[1443]: Removed session 7. Dec 13 01:28:02.296795 kubelet[2582]: E1213 01:28:02.296711 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:05.550153 kubelet[2582]: E1213 01:28:05.550095 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:06.050247 kubelet[2582]: E1213 01:28:06.050003 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:06.412464 update_engine[1444]: I20241213 01:28:06.412266 1444 update_attempter.cc:509] Updating boot flags... Dec 13 01:28:06.692925 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2667) Dec 13 01:28:06.740917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2670) Dec 13 01:28:06.786924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2670) Dec 13 01:28:10.266374 kubelet[2582]: E1213 01:28:10.266317 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:11.058958 kubelet[2582]: E1213 01:28:11.058858 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:12.108266 kubelet[2582]: I1213 01:28:12.108206 2582 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:28:12.108829 containerd[1462]: time="2024-12-13T01:28:12.108728832Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
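The dns.go:153 errors repeated throughout this log come from kubelet's resolv.conf handling: the Linux resolver honors at most three nameservers, kubelet enforces that cap, and it logs the line it actually applied ("1.1.1.1 1.0.0.1 8.8.8.8") whenever the host file listed more. A minimal sketch of that truncation, assuming a resolv.conf-style input (not kubelet's actual parser):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc/kubelet limit of three resolvers.
const maxNameservers = 3

// applyNameserverLimit keeps the first three nameserver entries and
// reports whether any were dropped, roughly what kubelet warns about.
func applyNameserverLimit(resolvConf string) (kept []string, dropped bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		return kept[:maxNameservers], true
	}
	return kept, false
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyNameserverLimit(conf)
	fmt.Println("applied nameserver line:", strings.Join(kept, " "), "dropped:", dropped)
}
```

The warning is harmless but noisy; the usual remedy is trimming the host's resolv.conf to three entries or pointing pods at a dedicated resolver.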
Dec 13 01:28:12.109219 kubelet[2582]: I1213 01:28:12.109060 2582 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:28:12.301063 kubelet[2582]: E1213 01:28:12.300977 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:12.686286 kubelet[2582]: I1213 01:28:12.686237 2582 topology_manager.go:215] "Topology Admit Handler" podUID="57cc2170-d1ba-4ea7-9939-d69b04e26c2d" podNamespace="kube-system" podName="kube-proxy-5f998" Dec 13 01:28:12.691566 kubelet[2582]: W1213 01:28:12.691513 2582 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 01:28:12.691849 kubelet[2582]: E1213 01:28:12.691810 2582 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 01:28:12.691960 kubelet[2582]: W1213 01:28:12.691928 2582 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 01:28:12.692092 kubelet[2582]: E1213 01:28:12.691955 2582 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 01:28:12.692920 kubelet[2582]: I1213 01:28:12.692445 2582 topology_manager.go:215] "Topology Admit Handler" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" podNamespace="kube-system" podName="cilium-s8tsj" Dec 13 01:28:12.707993 systemd[1]: Created slice kubepods-besteffort-pod57cc2170_d1ba_4ea7_9939_d69b04e26c2d.slice - libcontainer container kubepods-besteffort-pod57cc2170_d1ba_4ea7_9939_d69b04e26c2d.slice. Dec 13 01:28:12.725068 systemd[1]: Created slice kubepods-burstable-pod74de468a_fe4b_48a9_9e21_580c7909b725.slice - libcontainer container kubepods-burstable-pod74de468a_fe4b_48a9_9e21_580c7909b725.slice. 
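The "Created slice" entries show kubelet's systemd cgroup driver at work: each pod gets a slice whose name embeds the QoS class and the pod UID with dashes escaped to underscores. A sketch of that name construction, using only the escaping visible in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds a systemd slice name like the ones in this log,
// e.g. kubepods-besteffort-pod57cc2170_d1ba_4ea7_9939_d69b04e26c2d.slice.
// qos is "besteffort" or "burstable"; in the real hierarchy, guaranteed
// pods sit directly under kubepods.slice without a QoS segment.
func podSliceName(qos, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("besteffort", "57cc2170-d1ba-4ea7-9939-d69b04e26c2d"))
	fmt.Println(podSliceName("burstable", "74de468a-fe4b-48a9-9e21-580c7909b725"))
}
```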
Dec 13 01:28:12.791304 kubelet[2582]: I1213 01:28:12.791209 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-lib-modules\") pod \"kube-proxy-5f998\" (UID: \"57cc2170-d1ba-4ea7-9939-d69b04e26c2d\") " pod="kube-system/kube-proxy-5f998" Dec 13 01:28:12.791304 kubelet[2582]: I1213 01:28:12.791281 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-hostproc\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791304 kubelet[2582]: I1213 01:28:12.791311 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-hubble-tls\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791304 kubelet[2582]: I1213 01:28:12.791341 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-xtables-lock\") pod \"kube-proxy-5f998\" (UID: \"57cc2170-d1ba-4ea7-9939-d69b04e26c2d\") " pod="kube-system/kube-proxy-5f998" Dec 13 01:28:12.791304 kubelet[2582]: I1213 01:28:12.791374 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cni-path\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791304 kubelet[2582]: I1213 01:28:12.791395 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74de468a-fe4b-48a9-9e21-580c7909b725-clustermesh-secrets\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791750 kubelet[2582]: I1213 01:28:12.791415 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-run\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791750 kubelet[2582]: I1213 01:28:12.791433 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-lib-modules\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791750 kubelet[2582]: I1213 01:28:12.791455 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-kube-proxy\") pod \"kube-proxy-5f998\" (UID: \"57cc2170-d1ba-4ea7-9939-d69b04e26c2d\") " pod="kube-system/kube-proxy-5f998" Dec 13 01:28:12.791750 kubelet[2582]: I1213 01:28:12.791473 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-etc-cni-netd\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791750 kubelet[2582]: I1213 01:28:12.791494 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-config-path\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791750 kubelet[2582]: I1213 01:28:12.791512 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-bpf-maps\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791983 kubelet[2582]: I1213 01:28:12.791529 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-xtables-lock\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791983 kubelet[2582]: I1213 01:28:12.791552 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58vrz\" (UniqueName: \"kubernetes.io/projected/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-kube-api-access-58vrz\") pod \"kube-proxy-5f998\" (UID: \"57cc2170-d1ba-4ea7-9939-d69b04e26c2d\") " pod="kube-system/kube-proxy-5f998" Dec 13 01:28:12.791983 kubelet[2582]: I1213 01:28:12.791571 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-net\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791983 kubelet[2582]: I1213 01:28:12.791590 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-kernel\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.791983 kubelet[2582]: I1213 01:28:12.791608 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-cgroup\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:12.792115 kubelet[2582]: I1213 01:28:12.791628 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4pm4\" (UniqueName: \"kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-kube-api-access-z4pm4\") pod \"cilium-s8tsj\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " pod="kube-system/cilium-s8tsj" Dec 13 01:28:13.392517 kubelet[2582]: I1213 01:28:13.392455 2582 topology_manager.go:215] "Topology Admit Handler" podUID="fd33d0e4-b9ac-4403-9125-df9c108452ae" podNamespace="kube-system" podName="cilium-operator-5cc964979-8v9dh" Dec 13 01:28:13.403942 systemd[1]: Created slice 
kubepods-besteffort-podfd33d0e4_b9ac_4403_9125_df9c108452ae.slice - libcontainer container kubepods-besteffort-podfd33d0e4_b9ac_4403_9125_df9c108452ae.slice. Dec 13 01:28:13.496687 kubelet[2582]: I1213 01:28:13.496616 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd33d0e4-b9ac-4403-9125-df9c108452ae-cilium-config-path\") pod \"cilium-operator-5cc964979-8v9dh\" (UID: \"fd33d0e4-b9ac-4403-9125-df9c108452ae\") " pod="kube-system/cilium-operator-5cc964979-8v9dh" Dec 13 01:28:13.496687 kubelet[2582]: I1213 01:28:13.496679 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khh2z\" (UniqueName: \"kubernetes.io/projected/fd33d0e4-b9ac-4403-9125-df9c108452ae-kube-api-access-khh2z\") pod \"cilium-operator-5cc964979-8v9dh\" (UID: \"fd33d0e4-b9ac-4403-9125-df9c108452ae\") " pod="kube-system/cilium-operator-5cc964979-8v9dh" Dec 13 01:28:13.893500 kubelet[2582]: E1213 01:28:13.893435 2582 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:13.893692 kubelet[2582]: E1213 01:28:13.893602 2582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-kube-proxy podName:57cc2170-d1ba-4ea7-9939-d69b04e26c2d nodeName:}" failed. No retries permitted until 2024-12-13 01:28:14.393557873 +0000 UTC m=+14.522853869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-kube-proxy") pod "kube-proxy-5f998" (UID: "57cc2170-d1ba-4ea7-9939-d69b04e26c2d") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.026148 kubelet[2582]: E1213 01:28:14.026064 2582 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.026148 kubelet[2582]: E1213 01:28:14.026137 2582 projected.go:200] Error preparing data for projected volume kube-api-access-58vrz for pod kube-system/kube-proxy-5f998: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.026415 kubelet[2582]: E1213 01:28:14.026254 2582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-kube-api-access-58vrz podName:57cc2170-d1ba-4ea7-9939-d69b04e26c2d nodeName:}" failed. No retries permitted until 2024-12-13 01:28:14.526228011 +0000 UTC m=+14.655524007 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-58vrz" (UniqueName: "kubernetes.io/projected/57cc2170-d1ba-4ea7-9939-d69b04e26c2d-kube-api-access-58vrz") pod "kube-proxy-5f998" (UID: "57cc2170-d1ba-4ea7-9939-d69b04e26c2d") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.027501 kubelet[2582]: E1213 01:28:14.027336 2582 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.027501 kubelet[2582]: E1213 01:28:14.027354 2582 projected.go:200] Error preparing data for projected volume kube-api-access-z4pm4 for pod kube-system/cilium-s8tsj: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.027501 kubelet[2582]: E1213 01:28:14.027388 2582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-kube-api-access-z4pm4 podName:74de468a-fe4b-48a9-9e21-580c7909b725 nodeName:}" failed. No retries permitted until 2024-12-13 01:28:14.527379569 +0000 UTC m=+14.656675565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z4pm4" (UniqueName: "kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-kube-api-access-z4pm4") pod "cilium-s8tsj" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:28:14.309041 kubelet[2582]: E1213 01:28:14.308984 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:14.309907 containerd[1462]: time="2024-12-13T01:28:14.309856734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8v9dh,Uid:fd33d0e4-b9ac-4403-9125-df9c108452ae,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:14.819540 kubelet[2582]: E1213 01:28:14.819469 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:14.820935 containerd[1462]: time="2024-12-13T01:28:14.820214764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5f998,Uid:57cc2170-d1ba-4ea7-9939-d69b04e26c2d,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:14.829678 kubelet[2582]: E1213 01:28:14.829603 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:14.830431 containerd[1462]: time="2024-12-13T01:28:14.830371756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8tsj,Uid:74de468a-fe4b-48a9-9e21-580c7909b725,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:14.876320 containerd[1462]: time="2024-12-13T01:28:14.876159124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:14.876320 containerd[1462]: time="2024-12-13T01:28:14.876246291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:14.876320 containerd[1462]: time="2024-12-13T01:28:14.876260670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:14.876536 containerd[1462]: time="2024-12-13T01:28:14.876386377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:14.908290 systemd[1]: Started cri-containerd-548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac.scope - libcontainer container 548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac. Dec 13 01:28:14.969958 containerd[1462]: time="2024-12-13T01:28:14.969868391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8v9dh,Uid:fd33d0e4-b9ac-4403-9125-df9c108452ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac\"" Dec 13 01:28:14.970984 kubelet[2582]: E1213 01:28:14.970944 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:14.972484 containerd[1462]: time="2024-12-13T01:28:14.972446909Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:28:15.584081 containerd[1462]: time="2024-12-13T01:28:15.583960313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:15.584081 containerd[1462]: time="2024-12-13T01:28:15.584030155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:15.584081 containerd[1462]: time="2024-12-13T01:28:15.584041408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:15.584845 containerd[1462]: time="2024-12-13T01:28:15.584137993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:15.588988 containerd[1462]: time="2024-12-13T01:28:15.585743300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:15.588988 containerd[1462]: time="2024-12-13T01:28:15.585828654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:15.588988 containerd[1462]: time="2024-12-13T01:28:15.585842372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:15.588988 containerd[1462]: time="2024-12-13T01:28:15.585979640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:15.617156 systemd[1]: Started cri-containerd-0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db.scope - libcontainer container 0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db. Dec 13 01:28:15.619075 systemd[1]: Started cri-containerd-a46548bb6d2d7a7b84b67138ffa2b71e7ad2bd17c21257a32a36859a19f4c9da.scope - libcontainer container a46548bb6d2d7a7b84b67138ffa2b71e7ad2bd17c21257a32a36859a19f4c9da. 
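The MountVolume.SetUp failures a few entries back were transient: the operation executor records "No retries permitted until ..." with a durationBeforeRetry of 500ms, and on repeated failure that delay grows exponentially (the real cap in kubelet's nestedpendingoperations is on the order of minutes). A sketch of the pattern, assuming a simple doubling backoff rather than kubelet's exact bookkeeping:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay from 500ms up to
// maxDelay, mirroring the "durationBeforeRetry 500ms" entries above.
func retryWithBackoff(op func() error, maxDelay time.Duration) error {
	delay := 500 * time.Millisecond
	for {
		if err := op(); err == nil {
			return nil
		} else {
			fmt.Printf("retrying in %v after: %v\n", delay, err)
		}
		time.Sleep(delay)
		if delay*2 <= maxDelay {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("failed to sync configmap cache")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("mounted after", attempts, "attempts")
}
```

Here the retries succeeded quickly: the configmap cache synced and both sandboxes came up within two seconds of the first failure.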
Dec 13 01:28:15.651600 containerd[1462]: time="2024-12-13T01:28:15.651504755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5f998,Uid:57cc2170-d1ba-4ea7-9939-d69b04e26c2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a46548bb6d2d7a7b84b67138ffa2b71e7ad2bd17c21257a32a36859a19f4c9da\"" Dec 13 01:28:15.652746 kubelet[2582]: E1213 01:28:15.652714 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:15.655635 containerd[1462]: time="2024-12-13T01:28:15.655426412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8tsj,Uid:74de468a-fe4b-48a9-9e21-580c7909b725,Namespace:kube-system,Attempt:0,} returns sandbox id \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\"" Dec 13 01:28:15.657724 containerd[1462]: time="2024-12-13T01:28:15.657679524Z" level=info msg="CreateContainer within sandbox \"a46548bb6d2d7a7b84b67138ffa2b71e7ad2bd17c21257a32a36859a19f4c9da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:28:15.658161 kubelet[2582]: E1213 01:28:15.658077 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:15.810199 containerd[1462]: time="2024-12-13T01:28:15.810136457Z" level=info msg="CreateContainer within sandbox \"a46548bb6d2d7a7b84b67138ffa2b71e7ad2bd17c21257a32a36859a19f4c9da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e63fa44f1693810a6aaf9d695ecad8839012a1beb23c8be475ca83f4fca3faaa\"" Dec 13 01:28:15.811517 containerd[1462]: time="2024-12-13T01:28:15.810723588Z" level=info msg="StartContainer for \"e63fa44f1693810a6aaf9d695ecad8839012a1beb23c8be475ca83f4fca3faaa\"" Dec 13 01:28:15.844213 systemd[1]: Started cri-containerd-e63fa44f1693810a6aaf9d695ecad8839012a1beb23c8be475ca83f4fca3faaa.scope - libcontainer container e63fa44f1693810a6aaf9d695ecad8839012a1beb23c8be475ca83f4fca3faaa. Dec 13 01:28:15.878755 containerd[1462]: time="2024-12-13T01:28:15.878696299Z" level=info msg="StartContainer for \"e63fa44f1693810a6aaf9d695ecad8839012a1beb23c8be475ca83f4fca3faaa\" returns successfully" Dec 13 01:28:16.070225 kubelet[2582]: E1213 01:28:16.070170 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:16.083828 kubelet[2582]: I1213 01:28:16.083768 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5f998" podStartSLOduration=4.083703006 podStartE2EDuration="4.083703006s" podCreationTimestamp="2024-12-13 01:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:16.08341751 +0000 UTC m=+16.212713506" watchObservedRunningTime="2024-12-13 01:28:16.083703006 +0000 UTC m=+16.212999002" Dec 13 01:28:16.569047 systemd[1]: run-containerd-runc-k8s.io-0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db-runc.xLW0JH.mount: Deactivated successfully. Dec 13 01:28:18.056885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948488277.mount: Deactivated successfully. 
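The podStartSLOduration figure above is plain arithmetic: observed running time minus the pod's creation timestamp, minus any image-pull window (zero for kube-proxy, since its firstStartedPulling/lastFinishedPulling are the zero time). A quick check of the ~4.08s figure, noting that the printed creation timestamp is truncated to whole seconds, which accounts for the sub-millisecond mismatch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 01:28:12 +0000 UTC")
	observed, _ := time.Parse(layout, "2024-12-13 01:28:16.08341751 +0000 UTC")
	// No image pull happened for kube-proxy, so the SLO duration is
	// simply observed - created.
	fmt.Println(observed.Sub(created)) // ~4.083s, matching the log
}
```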
Dec 13 01:28:19.612569 containerd[1462]: time="2024-12-13T01:28:19.612277644Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:19.613869 containerd[1462]: time="2024-12-13T01:28:19.613724610Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907249" Dec 13 01:28:19.615233 containerd[1462]: time="2024-12-13T01:28:19.615161514Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:19.618736 containerd[1462]: time="2024-12-13T01:28:19.617742531Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.645232965s" Dec 13 01:28:19.618736 containerd[1462]: time="2024-12-13T01:28:19.617822300Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:28:19.632639 containerd[1462]: time="2024-12-13T01:28:19.630424929Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:28:19.632639 containerd[1462]: time="2024-12-13T01:28:19.632124368Z" level=info msg="CreateContainer within sandbox \"548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:28:19.677854 containerd[1462]: time="2024-12-13T01:28:19.677599344Z" level=info msg="CreateContainer within sandbox \"548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\"" Dec 13 01:28:19.682138 containerd[1462]: time="2024-12-13T01:28:19.678592114Z" level=info msg="StartContainer for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\"" Dec 13 01:28:19.739322 systemd[1]: Started cri-containerd-3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068.scope - libcontainer container 3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068. 
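The pull above used a reference carrying both a tag and a digest (operator-generic:v1.12.5@sha256:...); when a digest is present the tag is informational only, which is why the result reports repo tag "" and keeps just the repo digest. A rough split of such a reference, using simple string handling rather than a full reference parser (real parsing lives in the distribution "reference" package):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef separates name, tag, and digest from a reference like
// repo/image:tag@sha256:abc. Illustrative only.
func splitRef(ref string) (name, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// Only a colon after the last slash is a tag separator, so
	// registry ports (host:5000/img) are not mistaken for tags.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	n, t, d := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Printf("name=%s tag=%s digest=%s\n", n, t, d)
}
```

Pinning by digest is why the subsequent ImageCreate events name the image only in its @sha256 form: the content address, not the tag, identifies what was stored.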
Dec 13 01:28:19.830734 containerd[1462]: time="2024-12-13T01:28:19.830631107Z" level=info msg="StartContainer for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" returns successfully" Dec 13 01:28:20.188404 kubelet[2582]: E1213 01:28:20.188323 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:21.179133 kubelet[2582]: E1213 01:28:21.179090 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:26.213581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361027084.mount: Deactivated successfully. Dec 13 01:28:30.221348 containerd[1462]: time="2024-12-13T01:28:30.221256520Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:30.222773 containerd[1462]: time="2024-12-13T01:28:30.222704018Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734767" Dec 13 01:28:30.224552 containerd[1462]: time="2024-12-13T01:28:30.224502029Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:30.226826 containerd[1462]: time="2024-12-13T01:28:30.226763809Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.596269992s" Dec 13 01:28:30.226826 containerd[1462]: time="2024-12-13T01:28:30.226817705Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:28:30.229934 containerd[1462]: time="2024-12-13T01:28:30.229862397Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:28:30.253651 containerd[1462]: time="2024-12-13T01:28:30.253567709Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\"" Dec 13 01:28:30.254394 containerd[1462]: time="2024-12-13T01:28:30.254350008Z" level=info msg="StartContainer for \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\"" Dec 13 01:28:30.299106 systemd[1]: Started cri-containerd-7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf.scope - libcontainer container 7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf. 
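The two pulls give a feel for registry bandwidth in this boot: the operator image moved 18,907,249 bytes in ~4.65s and the much larger cilium agent image 166,734,767 bytes in ~10.6s, roughly 3.9 and 15 MiB/s. The arithmetic, straight from the logged numbers:

```go
package main

import "fmt"

func main() {
	pulls := []struct {
		name  string
		bytes float64
		secs  float64
	}{
		{"operator-generic", 18907249, 4.645232965},
		{"cilium", 166734767, 10.596269992},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MiB/s\n", p.name, p.bytes/p.secs/(1<<20))
	}
}
```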
Dec 13 01:28:30.335778 containerd[1462]: time="2024-12-13T01:28:30.335702591Z" level=info msg="StartContainer for \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\" returns successfully" Dec 13 01:28:30.351795 systemd[1]: cri-containerd-7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf.scope: Deactivated successfully. Dec 13 01:28:31.209848 kubelet[2582]: E1213 01:28:31.209811 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:31.245566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf-rootfs.mount: Deactivated successfully. Dec 13 01:28:31.411389 kubelet[2582]: I1213 01:28:31.411282 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-8v9dh" podStartSLOduration=13.656140195999999 podStartE2EDuration="18.308535565s" podCreationTimestamp="2024-12-13 01:28:13 +0000 UTC" firstStartedPulling="2024-12-13 01:28:14.972000509 +0000 UTC m=+15.101296505" lastFinishedPulling="2024-12-13 01:28:19.624395878 +0000 UTC m=+19.753691874" observedRunningTime="2024-12-13 01:28:20.285281315 +0000 UTC m=+20.414577311" watchObservedRunningTime="2024-12-13 01:28:31.308535565 +0000 UTC m=+31.437831571" Dec 13 01:28:31.420388 containerd[1462]: time="2024-12-13T01:28:31.417532139Z" level=info msg="shim disconnected" id=7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf namespace=k8s.io Dec 13 01:28:31.420388 containerd[1462]: time="2024-12-13T01:28:31.420376504Z" level=warning msg="cleaning up after shim disconnected" id=7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf namespace=k8s.io Dec 13 01:28:31.420388 containerd[1462]: time="2024-12-13T01:28:31.420399320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:32.214455 kubelet[2582]: E1213 01:28:32.213884 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:32.217280 containerd[1462]: time="2024-12-13T01:28:32.217216116Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:28:32.675449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457502383.mount: Deactivated successfully. Dec 13 01:28:32.842864 containerd[1462]: time="2024-12-13T01:28:32.842798997Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\"" Dec 13 01:28:32.843934 containerd[1462]: time="2024-12-13T01:28:32.843866655Z" level=info msg="StartContainer for \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\"" Dec 13 01:28:32.887259 systemd[1]: Started cri-containerd-290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb.scope - libcontainer container 290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb. Dec 13 01:28:32.961851 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:28:32.962838 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
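mount-cgroup is the first of cilium's init containers, and the scope-Deactivated / "shim disconnected" / rootfs.mount cleanup trio above is the normal trace of an init container running to completion. Kubelet runs init containers strictly in sequence, starting the next only after the previous exits 0, which is why the same trio repeats four times below before cilium-agent starts. A schematic of that loop (not kubelet's actual code; runInit is a hypothetical stand-in for create/start/wait):

```go
package main

import "fmt"

// runInit is a hypothetical stand-in for "create, start, and wait on
// one init container", returning its exit code.
func runInit(name string) int {
	fmt.Println("StartContainer", name, "-> exited 0")
	return 0
}

func main() {
	// The sequence visible in this log for the cilium-s8tsj pod.
	inits := []string{
		"mount-cgroup",
		"apply-sysctl-overwrites",
		"mount-bpf-fs",
		"clean-cilium-state",
	}
	for _, name := range inits {
		if code := runInit(name); code != 0 {
			fmt.Println("init failed; pod restarts per restartPolicy")
			return
		}
	}
	fmt.Println("all init containers done; starting cilium-agent")
}
```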
Dec 13 01:28:32.962969 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:32.973481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:32.974009 systemd[1]: cri-containerd-290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb.scope: Deactivated successfully. Dec 13 01:28:33.089285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:33.229387 containerd[1462]: time="2024-12-13T01:28:33.229121548Z" level=info msg="StartContainer for \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\" returns successfully" Dec 13 01:28:33.254815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb-rootfs.mount: Deactivated successfully. Dec 13 01:28:33.700263 containerd[1462]: time="2024-12-13T01:28:33.700179140Z" level=info msg="shim disconnected" id=290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb namespace=k8s.io Dec 13 01:28:33.700263 containerd[1462]: time="2024-12-13T01:28:33.700253446Z" level=warning msg="cleaning up after shim disconnected" id=290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb namespace=k8s.io Dec 13 01:28:33.700263 containerd[1462]: time="2024-12-13T01:28:33.700266862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:34.239246 kubelet[2582]: E1213 01:28:34.238783 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:34.242377 containerd[1462]: time="2024-12-13T01:28:34.242310041Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:28:34.346824 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:47052.service - OpenSSH per-connection server daemon (10.0.0.1:47052). Dec 13 01:28:34.349308 containerd[1462]: time="2024-12-13T01:28:34.349226957Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\"" Dec 13 01:28:34.350987 containerd[1462]: time="2024-12-13T01:28:34.350212368Z" level=info msg="StartContainer for \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\"" Dec 13 01:28:34.399228 systemd[1]: Started cri-containerd-94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9.scope - libcontainer container 94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9. Dec 13 01:28:34.407135 sshd[3170]: Accepted publickey for core from 10.0.0.1 port 47052 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:34.409782 sshd[3170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:34.416982 systemd-logind[1443]: New session 8 of user core. Dec 13 01:28:34.426062 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:28:34.447361 containerd[1462]: time="2024-12-13T01:28:34.447277019Z" level=info msg="StartContainer for \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\" returns successfully" Dec 13 01:28:34.448121 systemd[1]: cri-containerd-94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9.scope: Deactivated successfully. 
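The systemd-sysctl.service stop/start sandwiched around apply-sysctl-overwrites above suggests the host re-ran its own sysctl pass while cilium adjusted kernel parameters; the causal link is an inference from the interleaving, not stated in the log. A sysctl "overwrite" itself is just a write under /proc/sys, sketched below with an assumed key (cilium is known to relax reverse-path filtering; the specific key and value are not taken from this log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl sets one kernel parameter by writing under /proc/sys,
// which is all a sysctl overwrite amounts to.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Assumed example key; requires root inside the host's net namespace.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
	}
}
```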
Dec 13 01:28:34.475161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9-rootfs.mount: Deactivated successfully. Dec 13 01:28:34.480947 containerd[1462]: time="2024-12-13T01:28:34.480455641Z" level=info msg="shim disconnected" id=94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9 namespace=k8s.io Dec 13 01:28:34.480947 containerd[1462]: time="2024-12-13T01:28:34.480525819Z" level=warning msg="cleaning up after shim disconnected" id=94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9 namespace=k8s.io Dec 13 01:28:34.480947 containerd[1462]: time="2024-12-13T01:28:34.480537192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:34.571638 sshd[3170]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:34.577986 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:47052.service: Deactivated successfully. Dec 13 01:28:34.581401 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:28:34.582484 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:28:34.583946 systemd-logind[1443]: Removed session 8. Dec 13 01:28:35.244419 kubelet[2582]: E1213 01:28:35.244358 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:35.248882 containerd[1462]: time="2024-12-13T01:28:35.248804427Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:28:35.280230 containerd[1462]: time="2024-12-13T01:28:35.280162925Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\"" Dec 13 01:28:35.281813 containerd[1462]: time="2024-12-13T01:28:35.280832963Z" level=info msg="StartContainer for \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\"" Dec 13 01:28:35.323270 systemd[1]: Started cri-containerd-0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d.scope - libcontainer container 0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d. Dec 13 01:28:35.363373 systemd[1]: cri-containerd-0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d.scope: Deactivated successfully. Dec 13 01:28:35.366741 containerd[1462]: time="2024-12-13T01:28:35.366689850Z" level=info msg="StartContainer for \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\" returns successfully" Dec 13 01:28:35.389394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d-rootfs.mount: Deactivated successfully. 
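mount-bpf-fs, which ran just above, ensures the BPF filesystem is mounted at /sys/fs/bpf so cilium's pinned maps survive agent restarts. A minimal sketch using golang.org/x/sys/unix; the standard mount point is assumed, and the real init step also checks whether bpffs is already mounted before attempting this:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"
	if err := os.MkdirAll(target, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount failed (needs CAP_SYS_ADMIN):", err)
	}
}
```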
Dec 13 01:28:35.399933 containerd[1462]: time="2024-12-13T01:28:35.399806216Z" level=info msg="shim disconnected" id=0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d namespace=k8s.io Dec 13 01:28:35.399933 containerd[1462]: time="2024-12-13T01:28:35.399908678Z" level=warning msg="cleaning up after shim disconnected" id=0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d namespace=k8s.io Dec 13 01:28:35.399933 containerd[1462]: time="2024-12-13T01:28:35.399925781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:36.249633 kubelet[2582]: E1213 01:28:36.249578 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:36.252195 containerd[1462]: time="2024-12-13T01:28:36.252126519Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:28:36.273351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766686945.mount: Deactivated successfully. Dec 13 01:28:36.279119 containerd[1462]: time="2024-12-13T01:28:36.279070227Z" level=info msg="CreateContainer within sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\"" Dec 13 01:28:36.279784 containerd[1462]: time="2024-12-13T01:28:36.279735995Z" level=info msg="StartContainer for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\"" Dec 13 01:28:36.314381 systemd[1]: Started cri-containerd-9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628.scope - libcontainer container 9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628. Dec 13 01:28:36.355970 containerd[1462]: time="2024-12-13T01:28:36.355906360Z" level=info msg="StartContainer for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" returns successfully" Dec 13 01:28:36.531548 kubelet[2582]: I1213 01:28:36.531498 2582 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:28:36.556529 kubelet[2582]: I1213 01:28:36.556402 2582 topology_manager.go:215] "Topology Admit Handler" podUID="56954eec-0547-4745-8991-c8cdc817b542" podNamespace="kube-system" podName="coredns-76f75df574-thcsv" Dec 13 01:28:36.560320 kubelet[2582]: I1213 01:28:36.560286 2582 topology_manager.go:215] "Topology Admit Handler" podUID="f74c68ad-12bc-4e05-b236-1792d06bee72" podNamespace="kube-system" podName="coredns-76f75df574-6hbt8" Dec 13 01:28:36.570576 systemd[1]: Created slice kubepods-burstable-pod56954eec_0547_4745_8991_c8cdc817b542.slice - libcontainer container kubepods-burstable-pod56954eec_0547_4745_8991_c8cdc817b542.slice. Dec 13 01:28:36.579314 systemd[1]: Created slice kubepods-burstable-podf74c68ad_12bc_4e05_b236_1792d06bee72.slice - libcontainer container kubepods-burstable-podf74c68ad_12bc_4e05_b236_1792d06bee72.slice. 
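Note the two slice flavors in this log: kube-proxy landed in a kubepods-besteffort slice, while cilium (and the coredns pods below) get kubepods-burstable ones. The QoS class follows from the pod's resource spec: no requests or limits anywhere gives BestEffort, requests equal to limits for every container gives Guaranteed, anything else Burstable. A simplified classifier, ignoring per-resource and init-container subtleties in the real rules:

```go
package main

import "fmt"

type container struct {
	requests, limits map[string]string
}

// qosClass is a simplified version of the Kubernetes QoS rules; it
// treats resource quantities as opaque strings.
func qosClass(containers []container) string {
	anySet, allEqual := false, true
	for _, c := range containers {
		if len(c.requests) > 0 || len(c.limits) > 0 {
			anySet = true
		}
		if len(c.requests) != len(c.limits) {
			allEqual = false
		}
		for k, v := range c.requests {
			if c.limits[k] != v {
				allEqual = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case allEqual:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	fmt.Println(qosClass([]container{{}}))                                           // BestEffort, like kube-proxy here
	fmt.Println(qosClass([]container{{requests: map[string]string{"cpu": "100m"}}})) // Burstable, like cilium/coredns
}
```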
Dec 13 01:28:36.683310 kubelet[2582]: I1213 01:28:36.683230 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn52g\" (UniqueName: \"kubernetes.io/projected/f74c68ad-12bc-4e05-b236-1792d06bee72-kube-api-access-hn52g\") pod \"coredns-76f75df574-6hbt8\" (UID: \"f74c68ad-12bc-4e05-b236-1792d06bee72\") " pod="kube-system/coredns-76f75df574-6hbt8" Dec 13 01:28:36.683310 kubelet[2582]: I1213 01:28:36.683316 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56954eec-0547-4745-8991-c8cdc817b542-config-volume\") pod \"coredns-76f75df574-thcsv\" (UID: \"56954eec-0547-4745-8991-c8cdc817b542\") " pod="kube-system/coredns-76f75df574-thcsv" Dec 13 01:28:36.683569 kubelet[2582]: I1213 01:28:36.683437 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scn7s\" (UniqueName: \"kubernetes.io/projected/56954eec-0547-4745-8991-c8cdc817b542-kube-api-access-scn7s\") pod \"coredns-76f75df574-thcsv\" (UID: \"56954eec-0547-4745-8991-c8cdc817b542\") " pod="kube-system/coredns-76f75df574-thcsv" Dec 13 01:28:36.683569 kubelet[2582]: I1213 01:28:36.683490 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f74c68ad-12bc-4e05-b236-1792d06bee72-config-volume\") pod \"coredns-76f75df574-6hbt8\" (UID: \"f74c68ad-12bc-4e05-b236-1792d06bee72\") " pod="kube-system/coredns-76f75df574-6hbt8" Dec 13 01:28:36.875672 kubelet[2582]: E1213 01:28:36.875473 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:36.877134 containerd[1462]: time="2024-12-13T01:28:36.876438853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-thcsv,Uid:56954eec-0547-4745-8991-c8cdc817b542,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:36.883535 kubelet[2582]: E1213 01:28:36.883461 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:36.884166 containerd[1462]: time="2024-12-13T01:28:36.884092591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6hbt8,Uid:f74c68ad-12bc-4e05-b236-1792d06bee72,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:37.255019 kubelet[2582]: E1213 01:28:37.254844 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:37.272095 kubelet[2582]: I1213 01:28:37.272035 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-s8tsj" podStartSLOduration=10.70341442 podStartE2EDuration="25.271976029s" podCreationTimestamp="2024-12-13 01:28:12 +0000 UTC" firstStartedPulling="2024-12-13 01:28:15.658717319 +0000 UTC m=+15.788013316" lastFinishedPulling="2024-12-13 01:28:30.227278929 +0000 UTC m=+30.356574925" observedRunningTime="2024-12-13 01:28:37.271321264 +0000 UTC m=+37.400617260" watchObservedRunningTime="2024-12-13 01:28:37.271976029 +0000 UTC m=+37.401272025" Dec 13 01:28:38.257164 kubelet[2582]: E1213 01:28:38.257119 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:38.809860 systemd-networkd[1406]: cilium_host: Link UP Dec 13 01:28:38.810368 systemd-networkd[1406]: cilium_net: Link UP Dec 13 01:28:38.810374 systemd-networkd[1406]: cilium_net: Gained carrier Dec 13 01:28:38.810692 systemd-networkd[1406]: cilium_host: Gained carrier Dec 13 01:28:38.811004 systemd-networkd[1406]: cilium_host: Gained IPv6LL Dec 13 01:28:38.894130 systemd-networkd[1406]: cilium_net: Gained IPv6LL Dec 13 01:28:38.942433 systemd-networkd[1406]: cilium_vxlan: Link UP Dec 13 01:28:38.942444 systemd-networkd[1406]: cilium_vxlan: Gained carrier Dec 13 01:28:39.189940 kernel: NET: Registered PF_ALG protocol family Dec 13 01:28:39.259028 kubelet[2582]: E1213 01:28:39.258978 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:39.585279 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:57074.service - OpenSSH per-connection server daemon (10.0.0.1:57074). Dec 13 01:28:39.635463 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 57074 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:39.637858 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:39.644142 systemd-logind[1443]: New session 9 of user core. Dec 13 01:28:39.657254 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:28:39.818865 sshd[3658]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:39.824077 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:57074.service: Deactivated successfully. Dec 13 01:28:39.826677 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:28:39.827568 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:28:39.828800 systemd-logind[1443]: Removed session 9. 
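The cilium_host/cilium_net pair that systemd-networkd reports gaining carrier is a veth pair the agent creates as the host-side gateway for pod traffic, with cilium_vxlan added for the overlay. Creating such a pair via the vishvananda/netlink package, an assumed dependency for illustration (cilium's own code path differs):

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_net"},
		PeerName:  "cilium_host",
	}
	// Equivalent to: ip link add cilium_net type veth peer name cilium_host
	if err := netlink.LinkAdd(veth); err != nil {
		fmt.Println("link add failed (needs CAP_NET_ADMIN):", err)
		return
	}
	for _, name := range []string{"cilium_net", "cilium_host"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		_ = netlink.LinkSetUp(link) // "Gained carrier", as systemd-networkd logs it
	}
}
```

The "NET: Registered PF_ALG protocol family" line alongside is the kernel crypto socket family loading, pulled in by the datapath setup.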
Dec 13 01:28:40.040283 systemd-networkd[1406]: lxc_health: Link UP Dec 13 01:28:40.049571 systemd-networkd[1406]: lxc_health: Gained carrier Dec 13 01:28:40.536645 systemd-networkd[1406]: lxcfa9cf33ecc06: Link UP Dec 13 01:28:40.545968 kernel: eth0: renamed from tmp0cb81 Dec 13 01:28:40.565531 systemd-networkd[1406]: lxca3b78ff5b9ba: Link UP Dec 13 01:28:40.566937 kernel: eth0: renamed from tmp33032 Dec 13 01:28:40.574356 systemd-networkd[1406]: lxcfa9cf33ecc06: Gained carrier Dec 13 01:28:40.575397 systemd-networkd[1406]: lxca3b78ff5b9ba: Gained carrier Dec 13 01:28:40.832994 kubelet[2582]: E1213 01:28:40.832583 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:40.879611 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL Dec 13 01:28:41.264998 systemd-networkd[1406]: lxc_health: Gained IPv6LL Dec 13 01:28:41.266586 kubelet[2582]: E1213 01:28:41.266550 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:42.094170 systemd-networkd[1406]: lxca3b78ff5b9ba: Gained IPv6LL Dec 13 01:28:42.158278 systemd-networkd[1406]: lxcfa9cf33ecc06: Gained IPv6LL Dec 13 01:28:42.269834 kubelet[2582]: E1213 01:28:42.269755 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:44.835037 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:57076.service - OpenSSH per-connection server daemon (10.0.0.1:57076). Dec 13 01:28:44.880192 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 57076 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:44.882612 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:44.887822 systemd-logind[1443]: New session 10 of user core. Dec 13 01:28:44.892055 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:28:45.060343 sshd[3837]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:45.066586 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:57076.service: Deactivated successfully. Dec 13 01:28:45.072867 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:28:45.075169 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:28:45.079577 systemd-logind[1443]: Removed session 10. Dec 13 01:28:45.217570 containerd[1462]: time="2024-12-13T01:28:45.217313222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:45.217570 containerd[1462]: time="2024-12-13T01:28:45.217375573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:45.217570 containerd[1462]: time="2024-12-13T01:28:45.217389541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:45.218364 containerd[1462]: time="2024-12-13T01:28:45.217471259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:45.219955 containerd[1462]: time="2024-12-13T01:28:45.219505117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:45.219955 containerd[1462]: time="2024-12-13T01:28:45.219650319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:45.222801 containerd[1462]: time="2024-12-13T01:28:45.222682861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:45.225710 containerd[1462]: time="2024-12-13T01:28:45.222870496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:45.244362 systemd[1]: Started cri-containerd-0cb817077536b9dc71e1f09fdddef98a57930fdd9f85b585950f63dac5322a72.scope - libcontainer container 0cb817077536b9dc71e1f09fdddef98a57930fdd9f85b585950f63dac5322a72. Dec 13 01:28:45.258308 systemd[1]: Started cri-containerd-330322f5fb8d7a856834079144df3a36fc5e7304ad9d1d11d240677cf501ca83.scope - libcontainer container 330322f5fb8d7a856834079144df3a36fc5e7304ad9d1d11d240677cf501ca83. Dec 13 01:28:45.264821 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:45.274975 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:28:45.298261 containerd[1462]: time="2024-12-13T01:28:45.298207011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6hbt8,Uid:f74c68ad-12bc-4e05-b236-1792d06bee72,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cb817077536b9dc71e1f09fdddef98a57930fdd9f85b585950f63dac5322a72\"" Dec 13 01:28:45.301350 kubelet[2582]: E1213 01:28:45.301320 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:45.305146 containerd[1462]: time="2024-12-13T01:28:45.305097118Z" level=info msg="CreateContainer within sandbox \"0cb817077536b9dc71e1f09fdddef98a57930fdd9f85b585950f63dac5322a72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:45.316550 containerd[1462]: time="2024-12-13T01:28:45.316503814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-thcsv,Uid:56954eec-0547-4745-8991-c8cdc817b542,Namespace:kube-system,Attempt:0,} returns sandbox id \"330322f5fb8d7a856834079144df3a36fc5e7304ad9d1d11d240677cf501ca83\"" Dec 13 01:28:45.318173 kubelet[2582]: E1213 01:28:45.318145 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:45.321675 containerd[1462]: time="2024-12-13T01:28:45.321385173Z" level=info msg="CreateContainer within sandbox \"330322f5fb8d7a856834079144df3a36fc5e7304ad9d1d11d240677cf501ca83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:45.336247 containerd[1462]: time="2024-12-13T01:28:45.336178208Z" level=info msg="CreateContainer within sandbox \"0cb817077536b9dc71e1f09fdddef98a57930fdd9f85b585950f63dac5322a72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"fdb9485f4554a81a61fe2c1150dd4bccace5ddd206d30743e55a740601013c00\"" Dec 13 01:28:45.337955 containerd[1462]: time="2024-12-13T01:28:45.337137926Z" level=info msg="StartContainer for \"fdb9485f4554a81a61fe2c1150dd4bccace5ddd206d30743e55a740601013c00\"" Dec 13 01:28:45.359407 containerd[1462]: time="2024-12-13T01:28:45.359341500Z" level=info msg="CreateContainer within sandbox \"330322f5fb8d7a856834079144df3a36fc5e7304ad9d1d11d240677cf501ca83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bb835b867f360c1c35de83a632edfa1c6e39262c2561a285c0030b3f674664c\"" Dec 13 01:28:45.361471 containerd[1462]: time="2024-12-13T01:28:45.361427340Z" level=info msg="StartContainer for \"9bb835b867f360c1c35de83a632edfa1c6e39262c2561a285c0030b3f674664c\"" Dec 13 01:28:45.372197 systemd[1]: Started cri-containerd-fdb9485f4554a81a61fe2c1150dd4bccace5ddd206d30743e55a740601013c00.scope - libcontainer container fdb9485f4554a81a61fe2c1150dd4bccace5ddd206d30743e55a740601013c00. Dec 13 01:28:45.404298 systemd[1]: Started cri-containerd-9bb835b867f360c1c35de83a632edfa1c6e39262c2561a285c0030b3f674664c.scope - libcontainer container 9bb835b867f360c1c35de83a632edfa1c6e39262c2561a285c0030b3f674664c. Dec 13 01:28:45.435677 containerd[1462]: time="2024-12-13T01:28:45.434812788Z" level=info msg="StartContainer for \"fdb9485f4554a81a61fe2c1150dd4bccace5ddd206d30743e55a740601013c00\" returns successfully" Dec 13 01:28:45.449970 containerd[1462]: time="2024-12-13T01:28:45.449833357Z" level=info msg="StartContainer for \"9bb835b867f360c1c35de83a632edfa1c6e39262c2561a285c0030b3f674664c\" returns successfully" Dec 13 01:28:46.224362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464107657.mount: Deactivated successfully. Dec 13 01:28:46.280065 kubelet[2582]: E1213 01:28:46.279300 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:46.281581 kubelet[2582]: E1213 01:28:46.281543 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:46.306280 kubelet[2582]: I1213 01:28:46.306200 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-thcsv" podStartSLOduration=33.306142362 podStartE2EDuration="33.306142362s" podCreationTimestamp="2024-12-13 01:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:46.305240828 +0000 UTC m=+46.434536824" watchObservedRunningTime="2024-12-13 01:28:46.306142362 +0000 UTC m=+46.435438358" Dec 13 01:28:46.307013 kubelet[2582]: I1213 01:28:46.306321 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6hbt8" podStartSLOduration=33.306294397 podStartE2EDuration="33.306294397s" podCreationTimestamp="2024-12-13 01:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:46.292234032 +0000 UTC m=+46.421530028" watchObservedRunningTime="2024-12-13 01:28:46.306294397 +0000 UTC m=+46.435590623" Dec 13 01:28:47.284060 kubelet[2582]: E1213 01:28:47.283999 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:47.284219 kubelet[2582]: E1213 01:28:47.284092 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:48.286617 kubelet[2582]: E1213 01:28:48.286547 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:48.287262 kubelet[2582]: E1213 01:28:48.286745 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:28:50.074230 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:36834.service - OpenSSH per-connection server daemon (10.0.0.1:36834). Dec 13 01:28:50.120880 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 36834 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:50.123061 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:50.128613 systemd-logind[1443]: New session 11 of user core. Dec 13 01:28:50.139141 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:28:50.317127 sshd[4026]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:50.323032 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:36834.service: Deactivated successfully. Dec 13 01:28:50.326283 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:28:50.328098 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:28:50.329595 systemd-logind[1443]: Removed session 11. Dec 13 01:28:55.348578 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846). Dec 13 01:28:55.394881 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:55.397864 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:55.405385 systemd-logind[1443]: New session 12 of user core. Dec 13 01:28:55.411398 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:28:55.590740 sshd[4043]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:55.607194 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:36846.service: Deactivated successfully. Dec 13 01:28:55.612565 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:28:55.618276 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:28:55.635779 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:36860.service - OpenSSH per-connection server daemon (10.0.0.1:36860). Dec 13 01:28:55.638740 systemd-logind[1443]: Removed session 12. Dec 13 01:28:55.687628 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 36860 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:55.690596 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:55.700777 systemd-logind[1443]: New session 13 of user core. Dec 13 01:28:55.716471 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:28:55.971089 sshd[4058]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:55.987964 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:36860.service: Deactivated successfully. 
Dec 13 01:28:55.991715 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:28:55.995392 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:28:56.009538 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:36876.service - OpenSSH per-connection server daemon (10.0.0.1:36876). Dec 13 01:28:56.012501 systemd-logind[1443]: Removed session 13. Dec 13 01:28:56.068923 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 36876 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:28:56.071809 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:56.094393 systemd-logind[1443]: New session 14 of user core. Dec 13 01:28:56.102461 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:28:56.292685 sshd[4071]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:56.299149 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:36876.service: Deactivated successfully. Dec 13 01:28:56.302416 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:28:56.310064 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:28:56.312930 systemd-logind[1443]: Removed session 14. Dec 13 01:29:01.345574 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:46726.service - OpenSSH per-connection server daemon (10.0.0.1:46726). Dec 13 01:29:01.395917 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 46726 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:01.398821 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:01.415299 systemd-logind[1443]: New session 15 of user core. Dec 13 01:29:01.428630 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:29:01.597872 sshd[4087]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:01.606782 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:46726.service: Deactivated successfully. Dec 13 01:29:01.610347 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:29:01.612366 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:29:01.622772 systemd-logind[1443]: Removed session 15. Dec 13 01:29:06.620261 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:55668.service - OpenSSH per-connection server daemon (10.0.0.1:55668). Dec 13 01:29:06.659959 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 55668 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:06.662593 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:06.669487 systemd-logind[1443]: New session 16 of user core. Dec 13 01:29:06.683318 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:29:06.818590 sshd[4101]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:06.824768 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:55668.service: Deactivated successfully. Dec 13 01:29:06.828310 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:29:06.829623 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:29:06.831167 systemd-logind[1443]: Removed session 16. 
Dec 13 01:29:08.028291 kubelet[2582]: E1213 01:29:08.028175 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:11.837621 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:55670.service - OpenSSH per-connection server daemon (10.0.0.1:55670). Dec 13 01:29:11.884286 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 55670 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:11.886689 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:11.892739 systemd-logind[1443]: New session 17 of user core. Dec 13 01:29:11.901206 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:29:12.041693 sshd[4115]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:12.053182 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:55670.service: Deactivated successfully. Dec 13 01:29:12.056046 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:29:12.058726 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:29:12.071508 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:55676.service - OpenSSH per-connection server daemon (10.0.0.1:55676). Dec 13 01:29:12.073055 systemd-logind[1443]: Removed session 17. Dec 13 01:29:12.111214 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 55676 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:12.113483 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:12.119700 systemd-logind[1443]: New session 18 of user core. Dec 13 01:29:12.136281 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:29:12.441000 sshd[4129]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:12.457088 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:55676.service: Deactivated successfully. Dec 13 01:29:12.459562 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:29:12.462186 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:29:12.469395 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:55678.service - OpenSSH per-connection server daemon (10.0.0.1:55678). Dec 13 01:29:12.470806 systemd-logind[1443]: Removed session 18. Dec 13 01:29:12.509152 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 55678 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:12.511664 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:12.517757 systemd-logind[1443]: New session 19 of user core. Dec 13 01:29:12.527187 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:29:14.711314 sshd[4141]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:14.723433 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:55678.service: Deactivated successfully. Dec 13 01:29:14.725442 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:29:14.726964 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:29:14.733379 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:55680.service - OpenSSH per-connection server daemon (10.0.0.1:55680). Dec 13 01:29:14.735055 systemd-logind[1443]: Removed session 19. 
Dec 13 01:29:14.772575 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 55680 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:14.774823 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:14.780134 systemd-logind[1443]: New session 20 of user core. Dec 13 01:29:14.788183 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:29:15.227086 sshd[4161]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:15.235281 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:55680.service: Deactivated successfully. Dec 13 01:29:15.237659 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:29:15.239436 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:29:15.246413 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:55694.service - OpenSSH per-connection server daemon (10.0.0.1:55694). Dec 13 01:29:15.248039 systemd-logind[1443]: Removed session 20. Dec 13 01:29:15.277462 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 55694 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:15.279621 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:15.285848 systemd-logind[1443]: New session 21 of user core. Dec 13 01:29:15.297185 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:29:15.420988 sshd[4175]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:15.426603 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:55694.service: Deactivated successfully. Dec 13 01:29:15.429080 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:29:15.429761 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:29:15.430943 systemd-logind[1443]: Removed session 21. Dec 13 01:29:20.447373 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:36216.service - OpenSSH per-connection server daemon (10.0.0.1:36216). Dec 13 01:29:20.485760 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 36216 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:20.488339 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:20.493747 systemd-logind[1443]: New session 22 of user core. Dec 13 01:29:20.507240 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:29:20.652980 sshd[4192]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:20.657408 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:36216.service: Deactivated successfully. Dec 13 01:29:20.660553 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:29:20.663395 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:29:20.665545 systemd-logind[1443]: Removed session 22. Dec 13 01:29:25.669104 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). Dec 13 01:29:25.720012 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:25.723232 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:25.728727 systemd-logind[1443]: New session 23 of user core. Dec 13 01:29:25.736238 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 01:29:25.857148 sshd[4208]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:25.860576 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:36230.service: Deactivated successfully. Dec 13 01:29:25.862815 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:29:25.864832 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:29:25.866000 systemd-logind[1443]: Removed session 23. Dec 13 01:29:29.026455 kubelet[2582]: E1213 01:29:29.026386 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:30.877084 systemd[1]: Started sshd@23-10.0.0.47:22-10.0.0.1:34842.service - OpenSSH per-connection server daemon (10.0.0.1:34842). Dec 13 01:29:30.919976 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 34842 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:30.922265 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:30.927196 systemd-logind[1443]: New session 24 of user core. Dec 13 01:29:30.939137 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:29:31.025066 kubelet[2582]: E1213 01:29:31.025000 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:31.056143 sshd[4226]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:31.062809 systemd[1]: sshd@23-10.0.0.47:22-10.0.0.1:34842.service: Deactivated successfully. Dec 13 01:29:31.066145 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:29:31.067145 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:29:31.068648 systemd-logind[1443]: Removed session 24. Dec 13 01:29:36.071019 systemd[1]: Started sshd@24-10.0.0.47:22-10.0.0.1:54834.service - OpenSSH per-connection server daemon (10.0.0.1:54834). Dec 13 01:29:36.105791 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:36.107585 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:36.111601 systemd-logind[1443]: New session 25 of user core. Dec 13 01:29:36.124055 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:29:36.229692 sshd[4240]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:36.233305 systemd[1]: sshd@24-10.0.0.47:22-10.0.0.1:54834.service: Deactivated successfully. Dec 13 01:29:36.235120 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:29:36.235661 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:29:36.236447 systemd-logind[1443]: Removed session 25. Dec 13 01:29:37.026047 kubelet[2582]: E1213 01:29:37.025988 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:29:41.241148 systemd[1]: Started sshd@25-10.0.0.47:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838). 
Dec 13 01:29:41.276025 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:41.277627 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:41.281702 systemd-logind[1443]: New session 26 of user core. Dec 13 01:29:41.288041 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:29:41.398018 sshd[4255]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:41.406850 systemd[1]: sshd@25-10.0.0.47:22-10.0.0.1:54838.service: Deactivated successfully. Dec 13 01:29:41.408910 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:29:41.410539 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:29:41.418150 systemd[1]: Started sshd@26-10.0.0.47:22-10.0.0.1:54854.service - OpenSSH per-connection server daemon (10.0.0.1:54854). Dec 13 01:29:41.419134 systemd-logind[1443]: Removed session 26. Dec 13 01:29:41.448421 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 54854 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:29:41.450154 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:41.454372 systemd-logind[1443]: New session 27 of user core. Dec 13 01:29:41.469061 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:29:42.818394 containerd[1462]: time="2024-12-13T01:29:42.818338719Z" level=info msg="StopContainer for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" with timeout 30 (s)" Dec 13 01:29:42.819706 containerd[1462]: time="2024-12-13T01:29:42.819663064Z" level=info msg="Stop container \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" with signal terminated" Dec 13 01:29:42.855170 systemd[1]: run-containerd-runc-k8s.io-9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628-runc.XI6rvh.mount: Deactivated successfully. Dec 13 01:29:42.858440 systemd[1]: cri-containerd-3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068.scope: Deactivated successfully. Dec 13 01:29:42.881470 containerd[1462]: time="2024-12-13T01:29:42.881392917Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:42.886191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068-rootfs.mount: Deactivated successfully. 
Dec 13 01:29:42.886510 containerd[1462]: time="2024-12-13T01:29:42.886348623Z" level=info msg="StopContainer for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" with timeout 2 (s)" Dec 13 01:29:42.887122 containerd[1462]: time="2024-12-13T01:29:42.887035918Z" level=info msg="Stop container \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" with signal terminated" Dec 13 01:29:42.894791 containerd[1462]: time="2024-12-13T01:29:42.894716727Z" level=info msg="shim disconnected" id=3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068 namespace=k8s.io Dec 13 01:29:42.894791 containerd[1462]: time="2024-12-13T01:29:42.894775559Z" level=warning msg="cleaning up after shim disconnected" id=3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068 namespace=k8s.io Dec 13 01:29:42.894791 containerd[1462]: time="2024-12-13T01:29:42.894786290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:42.895620 systemd-networkd[1406]: lxc_health: Link DOWN Dec 13 01:29:42.895630 systemd-networkd[1406]: lxc_health: Lost carrier Dec 13 01:29:42.915106 containerd[1462]: time="2024-12-13T01:29:42.915037860Z" level=info msg="StopContainer for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" returns successfully" Dec 13 01:29:42.916054 containerd[1462]: time="2024-12-13T01:29:42.916008603Z" level=info msg="StopPodSandbox for \"548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac\"" Dec 13 01:29:42.921763 systemd[1]: cri-containerd-9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628.scope: Deactivated successfully. Dec 13 01:29:42.922123 systemd[1]: cri-containerd-9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628.scope: Consumed 9.006s CPU time. Dec 13 01:29:42.928553 containerd[1462]: time="2024-12-13T01:29:42.916079818Z" level=info msg="Container to stop \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:42.930780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac-shm.mount: Deactivated successfully. Dec 13 01:29:42.943606 systemd[1]: cri-containerd-548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac.scope: Deactivated successfully. Dec 13 01:29:42.963573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628-rootfs.mount: Deactivated successfully. 
Dec 13 01:29:42.972862 containerd[1462]: time="2024-12-13T01:29:42.972761959Z" level=info msg="shim disconnected" id=548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac namespace=k8s.io Dec 13 01:29:42.972862 containerd[1462]: time="2024-12-13T01:29:42.972836521Z" level=warning msg="cleaning up after shim disconnected" id=548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac namespace=k8s.io Dec 13 01:29:42.972862 containerd[1462]: time="2024-12-13T01:29:42.972850167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:42.973306 containerd[1462]: time="2024-12-13T01:29:42.973203217Z" level=info msg="shim disconnected" id=9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628 namespace=k8s.io Dec 13 01:29:42.973306 containerd[1462]: time="2024-12-13T01:29:42.973271507Z" level=warning msg="cleaning up after shim disconnected" id=9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628 namespace=k8s.io Dec 13 01:29:42.973306 containerd[1462]: time="2024-12-13T01:29:42.973282648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:43.004345 containerd[1462]: time="2024-12-13T01:29:43.004277580Z" level=info msg="StopContainer for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" returns successfully" Dec 13 01:29:43.005034 containerd[1462]: time="2024-12-13T01:29:43.004974343Z" level=info msg="StopPodSandbox for \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\"" Dec 13 01:29:43.005034 containerd[1462]: time="2024-12-13T01:29:43.005030239Z" level=info msg="Container to stop \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:43.005034 containerd[1462]: time="2024-12-13T01:29:43.005048543Z" level=info msg="Container to stop \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:43.005241 containerd[1462]: time="2024-12-13T01:29:43.005061097Z" level=info msg="Container to stop \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:43.005241 containerd[1462]: time="2024-12-13T01:29:43.005074142Z" level=info msg="Container to stop \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:43.005241 containerd[1462]: time="2024-12-13T01:29:43.005090803Z" level=info msg="Container to stop \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:29:43.007532 containerd[1462]: time="2024-12-13T01:29:43.007478866Z" level=info msg="TearDown network for sandbox \"548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac\" successfully" Dec 13 01:29:43.007532 containerd[1462]: time="2024-12-13T01:29:43.007525284Z" level=info msg="StopPodSandbox for \"548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac\" returns successfully" Dec 13 01:29:43.014722 systemd[1]: cri-containerd-0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db.scope: Deactivated successfully. 
Dec 13 01:29:43.046746 containerd[1462]: time="2024-12-13T01:29:43.046677068Z" level=info msg="shim disconnected" id=0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db namespace=k8s.io Dec 13 01:29:43.046746 containerd[1462]: time="2024-12-13T01:29:43.046734337Z" level=warning msg="cleaning up after shim disconnected" id=0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db namespace=k8s.io Dec 13 01:29:43.046746 containerd[1462]: time="2024-12-13T01:29:43.046743084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:43.064079 containerd[1462]: time="2024-12-13T01:29:43.064028459Z" level=info msg="TearDown network for sandbox \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" successfully" Dec 13 01:29:43.064079 containerd[1462]: time="2024-12-13T01:29:43.064070909Z" level=info msg="StopPodSandbox for \"0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db\" returns successfully" Dec 13 01:29:43.136702 kubelet[2582]: I1213 01:29:43.136534 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khh2z\" (UniqueName: \"kubernetes.io/projected/fd33d0e4-b9ac-4403-9125-df9c108452ae-kube-api-access-khh2z\") pod \"fd33d0e4-b9ac-4403-9125-df9c108452ae\" (UID: \"fd33d0e4-b9ac-4403-9125-df9c108452ae\") " Dec 13 01:29:43.136702 kubelet[2582]: I1213 01:29:43.136579 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd33d0e4-b9ac-4403-9125-df9c108452ae-cilium-config-path\") pod \"fd33d0e4-b9ac-4403-9125-df9c108452ae\" (UID: \"fd33d0e4-b9ac-4403-9125-df9c108452ae\") " Dec 13 01:29:43.136702 kubelet[2582]: I1213 01:29:43.136599 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cni-path\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.136702 kubelet[2582]: I1213 01:29:43.136615 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-net\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.136702 kubelet[2582]: I1213 01:29:43.136631 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-xtables-lock\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.136702 kubelet[2582]: I1213 01:29:43.136663 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4pm4\" (UniqueName: \"kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-kube-api-access-z4pm4\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.137360 kubelet[2582]: I1213 01:29:43.137056 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cni-path" (OuterVolumeSpecName: "cni-path") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.137360 kubelet[2582]: I1213 01:29:43.137246 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.137360 kubelet[2582]: I1213 01:29:43.137293 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.140489 kubelet[2582]: I1213 01:29:43.140441 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33d0e4-b9ac-4403-9125-df9c108452ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd33d0e4-b9ac-4403-9125-df9c108452ae" (UID: "fd33d0e4-b9ac-4403-9125-df9c108452ae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:29:43.140743 kubelet[2582]: I1213 01:29:43.140714 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd33d0e4-b9ac-4403-9125-df9c108452ae-kube-api-access-khh2z" (OuterVolumeSpecName: "kube-api-access-khh2z") pod "fd33d0e4-b9ac-4403-9125-df9c108452ae" (UID: "fd33d0e4-b9ac-4403-9125-df9c108452ae"). InnerVolumeSpecName "kube-api-access-khh2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:29:43.141003 kubelet[2582]: I1213 01:29:43.140979 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-kube-api-access-z4pm4" (OuterVolumeSpecName: "kube-api-access-z4pm4") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "kube-api-access-z4pm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:29:43.237526 kubelet[2582]: I1213 01:29:43.237449 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-kernel\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237526 kubelet[2582]: I1213 01:29:43.237513 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-cgroup\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237526 kubelet[2582]: I1213 01:29:43.237551 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-hubble-tls\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237765 kubelet[2582]: I1213 01:29:43.237575 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74de468a-fe4b-48a9-9e21-580c7909b725-clustermesh-secrets\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237765 kubelet[2582]: I1213 01:29:43.237596 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-etc-cni-netd\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237765 kubelet[2582]: I1213 01:29:43.237587 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.237765 kubelet[2582]: I1213 01:29:43.237619 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-config-path\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237765 kubelet[2582]: I1213 01:29:43.237639 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-lib-modules\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237965 kubelet[2582]: I1213 01:29:43.237591 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.237965 kubelet[2582]: I1213 01:29:43.237664 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-run\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237965 kubelet[2582]: I1213 01:29:43.237688 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-hostproc\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237965 kubelet[2582]: I1213 01:29:43.237703 2582 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-bpf-maps\") pod \"74de468a-fe4b-48a9-9e21-580c7909b725\" (UID: \"74de468a-fe4b-48a9-9e21-580c7909b725\") " Dec 13 01:29:43.237965 kubelet[2582]: I1213 01:29:43.237742 2582 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-khh2z\" (UniqueName: \"kubernetes.io/projected/fd33d0e4-b9ac-4403-9125-df9c108452ae-kube-api-access-khh2z\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.237965 kubelet[2582]: I1213 01:29:43.237755 2582 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd33d0e4-b9ac-4403-9125-df9c108452ae-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237793 2582 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237803 2582 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237813 2582 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237822 2582 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z4pm4\" (UniqueName: \"kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-kube-api-access-z4pm4\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237831 2582 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237842 2582 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.238114 kubelet[2582]: I1213 01:29:43.237921 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod 
"74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.238274 kubelet[2582]: I1213 01:29:43.237953 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.238274 kubelet[2582]: I1213 01:29:43.238154 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.238274 kubelet[2582]: I1213 01:29:43.238185 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.238274 kubelet[2582]: I1213 01:29:43.238209 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-hostproc" (OuterVolumeSpecName: "hostproc") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:29:43.241358 kubelet[2582]: I1213 01:29:43.241290 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74de468a-fe4b-48a9-9e21-580c7909b725-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:29:43.241508 kubelet[2582]: I1213 01:29:43.241445 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:29:43.241563 kubelet[2582]: I1213 01:29:43.241532 2582 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74de468a-fe4b-48a9-9e21-580c7909b725" (UID: "74de468a-fe4b-48a9-9e21-580c7909b725"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:29:43.338038 kubelet[2582]: I1213 01:29:43.337974 2582 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338038 kubelet[2582]: I1213 01:29:43.338016 2582 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338038 kubelet[2582]: I1213 01:29:43.338029 2582 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338038 kubelet[2582]: I1213 01:29:43.338040 2582 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338038 kubelet[2582]: I1213 01:29:43.338050 2582 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338038 kubelet[2582]: I1213 01:29:43.338061 2582 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74de468a-fe4b-48a9-9e21-580c7909b725-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338370 kubelet[2582]: I1213 01:29:43.338072 2582 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74de468a-fe4b-48a9-9e21-580c7909b725-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.338370 kubelet[2582]: I1213 01:29:43.338084 2582 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74de468a-fe4b-48a9-9e21-580c7909b725-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 01:29:43.483460 systemd[1]: Removed slice kubepods-besteffort-podfd33d0e4_b9ac_4403_9125_df9c108452ae.slice - libcontainer container kubepods-besteffort-podfd33d0e4_b9ac_4403_9125_df9c108452ae.slice. Dec 13 01:29:43.487450 systemd[1]: Removed slice kubepods-burstable-pod74de468a_fe4b_48a9_9e21_580c7909b725.slice - libcontainer container kubepods-burstable-pod74de468a_fe4b_48a9_9e21_580c7909b725.slice. Dec 13 01:29:43.487687 systemd[1]: kubepods-burstable-pod74de468a_fe4b_48a9_9e21_580c7909b725.slice: Consumed 9.137s CPU time. 
Dec 13 01:29:43.489317 kubelet[2582]: I1213 01:29:43.489278 2582 scope.go:117] "RemoveContainer" containerID="3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068" Dec 13 01:29:43.490642 containerd[1462]: time="2024-12-13T01:29:43.490603311Z" level=info msg="RemoveContainer for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\"" Dec 13 01:29:43.666331 containerd[1462]: time="2024-12-13T01:29:43.666271887Z" level=info msg="RemoveContainer for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" returns successfully" Dec 13 01:29:43.666724 kubelet[2582]: I1213 01:29:43.666681 2582 scope.go:117] "RemoveContainer" containerID="3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068" Dec 13 01:29:43.670111 containerd[1462]: time="2024-12-13T01:29:43.670046029Z" level=error msg="ContainerStatus for \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\": not found" Dec 13 01:29:43.670307 kubelet[2582]: E1213 01:29:43.670279 2582 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\": not found" containerID="3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068" Dec 13 01:29:43.670438 kubelet[2582]: I1213 01:29:43.670404 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068"} err="failed to get container status \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e6e36d8cf6e77e4fe2db44004d460d9fd02306770c72309a2871fac4298e068\": not found" Dec 13 01:29:43.670469 kubelet[2582]: I1213 01:29:43.670442 2582 scope.go:117] "RemoveContainer" containerID="9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628" Dec 13 01:29:43.671691 containerd[1462]: time="2024-12-13T01:29:43.671661214Z" level=info msg="RemoveContainer for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\"" Dec 13 01:29:43.755717 containerd[1462]: time="2024-12-13T01:29:43.755538633Z" level=info msg="RemoveContainer for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" returns successfully" Dec 13 01:29:43.755853 kubelet[2582]: I1213 01:29:43.755814 2582 scope.go:117] "RemoveContainer" containerID="0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d" Dec 13 01:29:43.757395 containerd[1462]: time="2024-12-13T01:29:43.757360721Z" level=info msg="RemoveContainer for \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\"" Dec 13 01:29:43.850926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db-rootfs.mount: Deactivated successfully. Dec 13 01:29:43.851072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0781757b693fa7c5c3554077306c4143870983359d514a722b8f9d9c2e5533db-shm.mount: Deactivated successfully. Dec 13 01:29:43.851183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-548115c44674c5fcc16a63f1412b920d8e032ec8441f8ff300ebd5b083a82aac-rootfs.mount: Deactivated successfully. 
Dec 13 01:29:43.851290 systemd[1]: var-lib-kubelet-pods-74de468a\x2dfe4b\x2d48a9\x2d9e21\x2d580c7909b725-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz4pm4.mount: Deactivated successfully.
Dec 13 01:29:43.851419 systemd[1]: var-lib-kubelet-pods-fd33d0e4\x2db9ac\x2d4403\x2d9125\x2ddf9c108452ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhh2z.mount: Deactivated successfully.
Dec 13 01:29:43.851528 systemd[1]: var-lib-kubelet-pods-74de468a\x2dfe4b\x2d48a9\x2d9e21\x2d580c7909b725-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:29:43.851629 systemd[1]: var-lib-kubelet-pods-74de468a\x2dfe4b\x2d48a9\x2d9e21\x2d580c7909b725-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:29:43.885996 containerd[1462]: time="2024-12-13T01:29:43.885921402Z" level=info msg="RemoveContainer for \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\" returns successfully"
Dec 13 01:29:43.886556 kubelet[2582]: I1213 01:29:43.886244 2582 scope.go:117] "RemoveContainer" containerID="94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9"
Dec 13 01:29:43.887743 containerd[1462]: time="2024-12-13T01:29:43.887706119Z" level=info msg="RemoveContainer for \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\""
Dec 13 01:29:43.917335 containerd[1462]: time="2024-12-13T01:29:43.917286218Z" level=info msg="RemoveContainer for \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\" returns successfully"
Dec 13 01:29:43.917636 kubelet[2582]: I1213 01:29:43.917572 2582 scope.go:117] "RemoveContainer" containerID="290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb"
Dec 13 01:29:43.918881 containerd[1462]: time="2024-12-13T01:29:43.918852620Z" level=info msg="RemoveContainer for \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\""
Dec 13 01:29:43.922510 containerd[1462]: time="2024-12-13T01:29:43.922459486Z" level=info msg="RemoveContainer for \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\" returns successfully"
Dec 13 01:29:43.922685 kubelet[2582]: I1213 01:29:43.922635 2582 scope.go:117] "RemoveContainer" containerID="7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf"
Dec 13 01:29:43.923900 containerd[1462]: time="2024-12-13T01:29:43.923850716Z" level=info msg="RemoveContainer for \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\""
Dec 13 01:29:43.927465 containerd[1462]: time="2024-12-13T01:29:43.927432003Z" level=info msg="RemoveContainer for \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\" returns successfully"
Dec 13 01:29:43.927653 kubelet[2582]: I1213 01:29:43.927619 2582 scope.go:117] "RemoveContainer" containerID="9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628"
Dec 13 01:29:43.928015 containerd[1462]: time="2024-12-13T01:29:43.927970014Z" level=error msg="ContainerStatus for \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\": not found"
Dec 13 01:29:43.928147 kubelet[2582]: E1213 01:29:43.928126 2582 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\": not found" containerID="9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628"
Dec 13 01:29:43.928186 kubelet[2582]: I1213 01:29:43.928171 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628"} err="failed to get container status \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eebc739249848480a95799ff25b64d988a8a4ffda92a7619fcca76a0c066628\": not found"
Dec 13 01:29:43.928186 kubelet[2582]: I1213 01:29:43.928182 2582 scope.go:117] "RemoveContainer" containerID="0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d"
Dec 13 01:29:43.928376 containerd[1462]: time="2024-12-13T01:29:43.928341749Z" level=error msg="ContainerStatus for \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\": not found"
Dec 13 01:29:43.928478 kubelet[2582]: E1213 01:29:43.928447 2582 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\": not found" containerID="0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d"
Dec 13 01:29:43.928478 kubelet[2582]: I1213 01:29:43.928476 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d"} err="failed to get container status \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b870d5ddba308bc99ed061e272fdd91bd4c866a33f54406361299444683948d\": not found"
Dec 13 01:29:43.928565 kubelet[2582]: I1213 01:29:43.928487 2582 scope.go:117] "RemoveContainer" containerID="94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9"
Dec 13 01:29:43.928685 containerd[1462]: time="2024-12-13T01:29:43.928619347Z" level=error msg="ContainerStatus for \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\": not found"
Dec 13 01:29:43.928878 kubelet[2582]: E1213 01:29:43.928834 2582 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\": not found" containerID="94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9"
Dec 13 01:29:43.929070 kubelet[2582]: I1213 01:29:43.928922 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9"} err="failed to get container status \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"94ade7c0520a4e86ed68e68c74f70ee210243ba1eb6d61142a6775bfeb7ce6e9\": not found"
Dec 13 01:29:43.929070 kubelet[2582]: I1213 01:29:43.928959 2582 scope.go:117] "RemoveContainer" containerID="290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb"
Dec 13 01:29:43.929218 containerd[1462]: time="2024-12-13T01:29:43.929184348Z" level=error msg="ContainerStatus for \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\": not found"
Dec 13 01:29:43.929401 kubelet[2582]: E1213 01:29:43.929362 2582 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\": not found" containerID="290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb"
Dec 13 01:29:43.929565 kubelet[2582]: I1213 01:29:43.929418 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb"} err="failed to get container status \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"290fec42fa41b2618847804b4320a93509ca21d820a4cf5fa2eddea75b7c96bb\": not found"
Dec 13 01:29:43.929565 kubelet[2582]: I1213 01:29:43.929437 2582 scope.go:117] "RemoveContainer" containerID="7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf"
Dec 13 01:29:43.929731 containerd[1462]: time="2024-12-13T01:29:43.929691221Z" level=error msg="ContainerStatus for \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\": not found"
Dec 13 01:29:43.929910 kubelet[2582]: E1213 01:29:43.929866 2582 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\": not found" containerID="7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf"
Dec 13 01:29:43.929964 kubelet[2582]: I1213 01:29:43.929926 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf"} err="failed to get container status \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"7217b051ce2d685540cecb44f2a0266972b6da071b8ad1bf8bb90ee886c34bcf\": not found"
Dec 13 01:29:44.028461 kubelet[2582]: I1213 01:29:44.028301 2582 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" path="/var/lib/kubelet/pods/74de468a-fe4b-48a9-9e21-580c7909b725/volumes"
Dec 13 01:29:44.029449 kubelet[2582]: I1213 01:29:44.029415 2582 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fd33d0e4-b9ac-4403-9125-df9c108452ae" path="/var/lib/kubelet/pods/fd33d0e4-b9ac-4403-9125-df9c108452ae/volumes"
Dec 13 01:29:44.785214 sshd[4269]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:44.797112 systemd[1]: sshd@26-10.0.0.47:22-10.0.0.1:54854.service: Deactivated successfully.
Dec 13 01:29:44.799145 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:29:44.800839 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:29:44.802209 systemd[1]: Started sshd@27-10.0.0.47:22-10.0.0.1:54862.service - OpenSSH per-connection server daemon (10.0.0.1:54862).
Dec 13 01:29:44.803101 systemd-logind[1443]: Removed session 27.
Dec 13 01:29:44.852975 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 54862 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:29:44.854665 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:44.858791 systemd-logind[1443]: New session 28 of user core.
Dec 13 01:29:44.869068 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:29:45.194204 kubelet[2582]: E1213 01:29:45.194080 2582 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:29:45.255814 sshd[4431]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:45.268622 kubelet[2582]: I1213 01:29:45.268558 2582 topology_manager.go:215] "Topology Admit Handler" podUID="5fddd4c8-9f5d-499c-a934-b3a8a621452e" podNamespace="kube-system" podName="cilium-bhsdj"
Dec 13 01:29:45.268765 kubelet[2582]: E1213 01:29:45.268654 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" containerName="apply-sysctl-overwrites"
Dec 13 01:29:45.268765 kubelet[2582]: E1213 01:29:45.268665 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" containerName="clean-cilium-state"
Dec 13 01:29:45.268765 kubelet[2582]: E1213 01:29:45.268674 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd33d0e4-b9ac-4403-9125-df9c108452ae" containerName="cilium-operator"
Dec 13 01:29:45.268765 kubelet[2582]: E1213 01:29:45.268681 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" containerName="mount-cgroup"
Dec 13 01:29:45.268765 kubelet[2582]: E1213 01:29:45.268688 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" containerName="mount-bpf-fs"
Dec 13 01:29:45.268765 kubelet[2582]: E1213 01:29:45.268696 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" containerName="cilium-agent"
Dec 13 01:29:45.268765 kubelet[2582]: I1213 01:29:45.268729 2582 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd33d0e4-b9ac-4403-9125-df9c108452ae" containerName="cilium-operator"
Dec 13 01:29:45.268765 kubelet[2582]: I1213 01:29:45.268736 2582 memory_manager.go:354] "RemoveStaleState removing state" podUID="74de468a-fe4b-48a9-9e21-580c7909b725" containerName="cilium-agent"
Dec 13 01:29:45.269290 systemd[1]: sshd@27-10.0.0.47:22-10.0.0.1:54862.service: Deactivated successfully.
Dec 13 01:29:45.273980 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:29:45.278092 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:29:45.286517 systemd[1]: Started sshd@28-10.0.0.47:22-10.0.0.1:54864.service - OpenSSH per-connection server daemon (10.0.0.1:54864).
Dec 13 01:29:45.291769 systemd-logind[1443]: Removed session 28.
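The `cni plugin not initialized` condition is expected at this point: the old Cilium agent pod that provided the CNI plugin has just been torn down, and its replacement `cilium-bhsdj` is only now being admitted. Runtime network readiness effectively reduces to "is a CNI network config present"; a rough sketch of that check, assuming the conventional `/etc/cni/net.d` directory (the location is configurable and the helper name is mine):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured reports whether any CNI network config is present in
// dir -- roughly the condition behind the NetworkReady status above.
func cniConfigured(dir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err == nil && len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/cni/net.d" // conventional location; assumption here
	if cniConfigured(dir) {
		fmt.Println("NetworkReady=true")
	} else {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
		os.Exit(1)
	}
}
```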
Dec 13 01:29:45.298052 systemd[1]: Created slice kubepods-burstable-pod5fddd4c8_9f5d_499c_a934_b3a8a621452e.slice - libcontainer container kubepods-burstable-pod5fddd4c8_9f5d_499c_a934_b3a8a621452e.slice.
Dec 13 01:29:45.322134 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 54864 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:29:45.323734 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:45.328223 systemd-logind[1443]: New session 29 of user core.
Dec 13 01:29:45.342020 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:29:45.392625 sshd[4446]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:45.402864 systemd[1]: sshd@28-10.0.0.47:22-10.0.0.1:54864.service: Deactivated successfully.
Dec 13 01:29:45.404987 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:29:45.406710 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:29:45.411132 systemd[1]: Started sshd@29-10.0.0.47:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866).
Dec 13 01:29:45.412138 systemd-logind[1443]: Removed session 29.
Dec 13 01:29:45.442631 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:29:45.444135 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:45.448122 systemd-logind[1443]: New session 30 of user core.
Dec 13 01:29:45.449164 kubelet[2582]: I1213 01:29:45.449030 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-hostproc\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449164 kubelet[2582]: I1213 01:29:45.449084 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-cilium-cgroup\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449305 kubelet[2582]: I1213 01:29:45.449163 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-cilium-run\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449305 kubelet[2582]: I1213 01:29:45.449209 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fddd4c8-9f5d-499c-a934-b3a8a621452e-clustermesh-secrets\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449305 kubelet[2582]: I1213 01:29:45.449251 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-etc-cni-netd\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449305 kubelet[2582]: I1213 01:29:45.449275 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59ff\" (UniqueName: \"kubernetes.io/projected/5fddd4c8-9f5d-499c-a934-b3a8a621452e-kube-api-access-d59ff\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449305 kubelet[2582]: I1213 01:29:45.449293 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fddd4c8-9f5d-499c-a934-b3a8a621452e-cilium-config-path\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449483 kubelet[2582]: I1213 01:29:45.449312 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-cni-path\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449483 kubelet[2582]: I1213 01:29:45.449331 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-xtables-lock\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449483 kubelet[2582]: I1213 01:29:45.449349 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-bpf-maps\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449483 kubelet[2582]: I1213 01:29:45.449367 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-host-proc-sys-kernel\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449483 kubelet[2582]: I1213 01:29:45.449385 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fddd4c8-9f5d-499c-a934-b3a8a621452e-hubble-tls\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449483 kubelet[2582]: I1213 01:29:45.449402 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-lib-modules\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449684 kubelet[2582]: I1213 01:29:45.449431 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5fddd4c8-9f5d-499c-a934-b3a8a621452e-cilium-ipsec-secrets\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.449684 kubelet[2582]: I1213 01:29:45.449450 2582 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fddd4c8-9f5d-499c-a934-b3a8a621452e-host-proc-sys-net\") pod \"cilium-bhsdj\" (UID: \"5fddd4c8-9f5d-499c-a934-b3a8a621452e\") " pod="kube-system/cilium-bhsdj"
Dec 13 01:29:45.457021 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 01:29:45.604805 kubelet[2582]: E1213 01:29:45.604769 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:45.605562 containerd[1462]: time="2024-12-13T01:29:45.605423816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bhsdj,Uid:5fddd4c8-9f5d-499c-a934-b3a8a621452e,Namespace:kube-system,Attempt:0,}"
Dec 13 01:29:45.629539 containerd[1462]: time="2024-12-13T01:29:45.629407740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:29:45.629539 containerd[1462]: time="2024-12-13T01:29:45.629498002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:29:45.629539 containerd[1462]: time="2024-12-13T01:29:45.629512218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:45.629715 containerd[1462]: time="2024-12-13T01:29:45.629612069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:45.651060 systemd[1]: Started cri-containerd-4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372.scope - libcontainer container 4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372.
Dec 13 01:29:45.675672 containerd[1462]: time="2024-12-13T01:29:45.675603308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bhsdj,Uid:5fddd4c8-9f5d-499c-a934-b3a8a621452e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\""
Dec 13 01:29:45.676434 kubelet[2582]: E1213 01:29:45.676406 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:45.678426 containerd[1462]: time="2024-12-13T01:29:45.678394343Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:29:45.694007 containerd[1462]: time="2024-12-13T01:29:45.693955086Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4\""
Dec 13 01:29:45.694548 containerd[1462]: time="2024-12-13T01:29:45.694467238Z" level=info msg="StartContainer for \"05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4\""
Dec 13 01:29:45.722050 systemd[1]: Started cri-containerd-05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4.scope - libcontainer container 05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4.
Dec 13 01:29:45.748521 containerd[1462]: time="2024-12-13T01:29:45.748469938Z" level=info msg="StartContainer for \"05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4\" returns successfully"
Dec 13 01:29:45.760118 systemd[1]: cri-containerd-05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4.scope: Deactivated successfully.
Dec 13 01:29:45.791109 containerd[1462]: time="2024-12-13T01:29:45.791036670Z" level=info msg="shim disconnected" id=05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4 namespace=k8s.io
Dec 13 01:29:45.791109 containerd[1462]: time="2024-12-13T01:29:45.791104509Z" level=warning msg="cleaning up after shim disconnected" id=05260ecd5b15cdc1e047559767ceb19944ea7c33a2d2a285df597de24ffcd4f4 namespace=k8s.io
Dec 13 01:29:45.791109 containerd[1462]: time="2024-12-13T01:29:45.791113165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:29:46.489709 kubelet[2582]: E1213 01:29:46.489673 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:46.492388 containerd[1462]: time="2024-12-13T01:29:46.492266790Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:29:46.505680 containerd[1462]: time="2024-12-13T01:29:46.505612196Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7\""
Dec 13 01:29:46.506256 containerd[1462]: time="2024-12-13T01:29:46.506221442Z" level=info msg="StartContainer for \"45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7\""
Dec 13 01:29:46.542019 systemd[1]: Started cri-containerd-45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7.scope - libcontainer container 45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7.
Dec 13 01:29:46.570722 containerd[1462]: time="2024-12-13T01:29:46.570677558Z" level=info msg="StartContainer for \"45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7\" returns successfully"
Dec 13 01:29:46.579929 systemd[1]: cri-containerd-45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7.scope: Deactivated successfully.
Dec 13 01:29:46.601455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7-rootfs.mount: Deactivated successfully.
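The `StartContainer … returns successfully` → `scope: Deactivated` → `shim disconnected` pattern repeats once per Cilium init step (`mount-cgroup` above, `apply-sysctl-overwrites` next) because each init container must run to completion before the next one is created. A sketch of that wait, using the real `ContainerStatus` CRI call; the package and function names are mine:

```go
// Package criwait sketches how an init step's exit can be awaited
// before the next container is created.
package criwait

import (
	"context"
	"time"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// WaitForExit polls ContainerStatus until the container reaches
// CONTAINER_EXITED and returns its exit code. rt is any connected
// CRI RuntimeServiceClient; id is the container ID to watch.
func WaitForExit(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (int32, error) {
	for {
		resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
		if err != nil {
			return 0, err
		}
		if s := resp.GetStatus(); s != nil && s.State == runtimeapi.ContainerState_CONTAINER_EXITED {
			return s.ExitCode, nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(200 * time.Millisecond):
		}
	}
}
```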
Dec 13 01:29:46.605538 containerd[1462]: time="2024-12-13T01:29:46.605469920Z" level=info msg="shim disconnected" id=45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7 namespace=k8s.io
Dec 13 01:29:46.605538 containerd[1462]: time="2024-12-13T01:29:46.605525315Z" level=warning msg="cleaning up after shim disconnected" id=45e861db5e7b20c296ecf95338e7d287f67d2a2d481f5587d8552340ac6324f7 namespace=k8s.io
Dec 13 01:29:46.605538 containerd[1462]: time="2024-12-13T01:29:46.605533420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:29:47.492719 kubelet[2582]: E1213 01:29:47.492676 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:47.495127 containerd[1462]: time="2024-12-13T01:29:47.495082107Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:29:47.511615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273281149.mount: Deactivated successfully.
Dec 13 01:29:47.520031 containerd[1462]: time="2024-12-13T01:29:47.519961928Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da\""
Dec 13 01:29:47.520835 containerd[1462]: time="2024-12-13T01:29:47.520805929Z" level=info msg="StartContainer for \"bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da\""
Dec 13 01:29:47.550067 systemd[1]: Started cri-containerd-bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da.scope - libcontainer container bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da.
Dec 13 01:29:47.578765 containerd[1462]: time="2024-12-13T01:29:47.578695904Z" level=info msg="StartContainer for \"bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da\" returns successfully"
Dec 13 01:29:47.580132 systemd[1]: cri-containerd-bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da.scope: Deactivated successfully.
Dec 13 01:29:47.607612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da-rootfs.mount: Deactivated successfully.
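`mount-bpf-fs` does what its name says: it ensures a BPF filesystem is mounted at `/sys/fs/bpf` so Cilium's eBPF maps outlive agent restarts. The equivalent mount call, sketched with `golang.org/x/sys/unix` (Linux only, needs root; the already-mounted check is a simplification):

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	target := "/sys/fs/bpf"

	// If the directory already reports the bpffs magic, something is
	// mounted there and there is nothing to do.
	var fs unix.Statfs_t
	if err := unix.Statfs(target, &fs); err == nil && fs.Type == unix.BPF_FS_MAGIC {
		fmt.Println("bpffs already mounted on", target)
		return
	}

	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs on %s: %v", target, err)
	}
	fmt.Println("mounted bpffs on", target)
}
```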
Dec 13 01:29:47.613055 containerd[1462]: time="2024-12-13T01:29:47.612988620Z" level=info msg="shim disconnected" id=bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da namespace=k8s.io
Dec 13 01:29:47.613403 containerd[1462]: time="2024-12-13T01:29:47.613060306Z" level=warning msg="cleaning up after shim disconnected" id=bc2d77edbfc15dd06d95c655b5fe74b1e7c87fce79677a1939c6b476c7ca44da namespace=k8s.io
Dec 13 01:29:47.613403 containerd[1462]: time="2024-12-13T01:29:47.613073982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:29:48.511568 kubelet[2582]: E1213 01:29:48.511515 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:48.514057 containerd[1462]: time="2024-12-13T01:29:48.513596974Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:29:48.755636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149355725.mount: Deactivated successfully.
Dec 13 01:29:48.816701 containerd[1462]: time="2024-12-13T01:29:48.816520129Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec\""
Dec 13 01:29:48.817450 containerd[1462]: time="2024-12-13T01:29:48.817399036Z" level=info msg="StartContainer for \"31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec\""
Dec 13 01:29:48.854069 systemd[1]: Started cri-containerd-31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec.scope - libcontainer container 31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec.
Dec 13 01:29:48.879790 systemd[1]: cri-containerd-31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec.scope: Deactivated successfully.
Dec 13 01:29:48.883196 containerd[1462]: time="2024-12-13T01:29:48.883058782Z" level=info msg="StartContainer for \"31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec\" returns successfully"
Dec 13 01:29:48.904733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec-rootfs.mount: Deactivated successfully.
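The `\x2d` and `\x7e` runs in unit names such as `var-lib-containerd-tmpmounts-containerd\x2dmount4149355725.mount` (and the kubelet volume mounts earlier) are systemd's path escaping, the same transform `systemd-escape --path` applies: `/` becomes `-` and any byte outside `[A-Za-z0-9:_.]` becomes `\xXX`. A minimal re-implementation that ignores corner cases (empty path, leading dot, the root path itself):

```go
package main

import "fmt"

// escapePath mimics systemd-escape --path for the common case: strip
// slashes at both ends, turn '/' into '-', and hex-escape any byte
// outside [A-Za-z0-9:_.].
func escapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	for len(p) > 0 && p[len(p)-1] == '/' {
		p = p[:len(p)-1]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/containerd/tmpmounts/containerd-mount4149355725"
	fmt.Println(escapePath(p) + ".mount")
	// prints: var-lib-containerd-tmpmounts-containerd\x2dmount4149355725.mount
}
```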
Dec 13 01:29:48.909106 containerd[1462]: time="2024-12-13T01:29:48.909022523Z" level=info msg="shim disconnected" id=31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec namespace=k8s.io
Dec 13 01:29:48.909106 containerd[1462]: time="2024-12-13T01:29:48.909093078Z" level=warning msg="cleaning up after shim disconnected" id=31ae1748cb6ebc9b489e3373b0d5d960afa54a1127dfba004a9852a40f2b19ec namespace=k8s.io
Dec 13 01:29:48.909106 containerd[1462]: time="2024-12-13T01:29:48.909101834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:29:49.518682 kubelet[2582]: E1213 01:29:49.518633 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:49.521772 containerd[1462]: time="2024-12-13T01:29:49.520865201Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:29:49.538311 containerd[1462]: time="2024-12-13T01:29:49.538252943Z" level=info msg="CreateContainer within sandbox \"4c0973b5645bf9543023b7910fdefce78c818da8ba9c66ca24eb43cdbc818372\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea52467141969766f68f61249f0702d82eab2cee9532069d7dedccf926834c1b\""
Dec 13 01:29:49.540067 containerd[1462]: time="2024-12-13T01:29:49.539147178Z" level=info msg="StartContainer for \"ea52467141969766f68f61249f0702d82eab2cee9532069d7dedccf926834c1b\""
Dec 13 01:29:49.569084 systemd[1]: Started cri-containerd-ea52467141969766f68f61249f0702d82eab2cee9532069d7dedccf926834c1b.scope - libcontainer container ea52467141969766f68f61249f0702d82eab2cee9532069d7dedccf926834c1b.
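The recurring `Nameserver limits exceeded` warning means this node's resolv.conf lists more nameservers than the resolver supports (glibc honors at most three), so the kubelet applies only the first three: 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of that truncation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; the limit being hit in the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applying first %d: %s\n",
			maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Println("nameservers:", strings.Join(servers, " "))
	}
}
```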
Dec 13 01:29:49.605711 containerd[1462]: time="2024-12-13T01:29:49.605649376Z" level=info msg="StartContainer for \"ea52467141969766f68f61249f0702d82eab2cee9532069d7dedccf926834c1b\" returns successfully"
Dec 13 01:29:50.063945 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:29:50.524000 kubelet[2582]: E1213 01:29:50.523970 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:50.537662 kubelet[2582]: I1213 01:29:50.537597 2582 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bhsdj" podStartSLOduration=5.537485127 podStartE2EDuration="5.537485127s" podCreationTimestamp="2024-12-13 01:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:50.537426967 +0000 UTC m=+110.666722963" watchObservedRunningTime="2024-12-13 01:29:50.537485127 +0000 UTC m=+110.666781123"
Dec 13 01:29:51.606302 kubelet[2582]: E1213 01:29:51.606263 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:53.137083 systemd-networkd[1406]: lxc_health: Link UP
Dec 13 01:29:53.144937 systemd-networkd[1406]: lxc_health: Gained carrier
Dec 13 01:29:53.606929 kubelet[2582]: E1213 01:29:53.606881 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:54.531483 kubelet[2582]: E1213 01:29:54.531447 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:54.862105 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Dec 13 01:29:55.533147 kubelet[2582]: E1213 01:29:55.533107 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:56.104855 kubelet[2582]: E1213 01:29:56.104810 2582 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:51990->127.0.0.1:46067: read: connection reset by peer
Dec 13 01:29:58.334928 sshd[4454]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:58.343163 systemd-logind[1443]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:29:58.344803 systemd[1]: sshd@29-10.0.0.47:22-10.0.0.1:54866.service: Deactivated successfully.
Dec 13 01:29:58.348565 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:29:58.350341 systemd-logind[1443]: Removed session 30.
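The `podStartSLOduration=5.537485127` above is plain timestamp arithmetic: the observed running time minus `podCreationTimestamp`, with both pull timestamps at the zero value because no image had to be pulled. Reproducing the number from the logged values (using the watch timestamp, which is what the SLO figure matches):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken directly from the log entry above.
	created, _ := time.Parse(time.RFC3339, "2024-12-13T01:29:45Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:29:50.537485127Z")

	// No pulls happened (firstStartedPulling is the zero time), so the
	// startup duration reduces to running - created.
	fmt.Println(running.Sub(created)) // 5.537485127s
}
```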