Nov 12 22:40:36.994678 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024 Nov 12 22:40:36.994716 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:40:36.994732 kernel: BIOS-provided physical RAM map: Nov 12 22:40:36.994740 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 22:40:36.994748 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 12 22:40:36.994756 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 12 22:40:36.994766 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 12 22:40:36.994775 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 12 22:40:36.994784 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Nov 12 22:40:36.994792 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Nov 12 22:40:36.994809 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Nov 12 22:40:36.994817 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Nov 12 22:40:36.994826 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Nov 12 22:40:36.994835 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Nov 12 22:40:36.994845 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Nov 12 22:40:36.994858 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 12 22:40:36.994879 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Nov 12 22:40:36.994888 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Nov 12 22:40:36.994897 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Nov 12 22:40:36.994906 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Nov 12 22:40:36.994915 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Nov 12 22:40:36.994924 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 12 22:40:36.994933 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 12 22:40:36.994942 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 12 22:40:36.994951 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Nov 12 22:40:36.994960 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 12 22:40:36.994969 kernel: NX (Execute Disable) protection: active Nov 12 22:40:36.994982 kernel: APIC: Static calls initialized Nov 12 22:40:36.994991 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Nov 12 22:40:36.995000 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Nov 12 22:40:36.995009 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Nov 12 22:40:36.995018 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Nov 12 22:40:36.995026 kernel: extended physical RAM map: Nov 12 22:40:36.995035 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 22:40:36.995044 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Nov 12 22:40:36.995068 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 12 22:40:36.995077 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Nov 12 22:40:36.995086 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 12 22:40:36.995099 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Nov 12 22:40:36.995109 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Nov 12 22:40:36.995128 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Nov 12 22:40:36.995138 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Nov 12 22:40:36.995148 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Nov 12 22:40:36.995158 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Nov 12 22:40:36.995168 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Nov 12 22:40:36.995181 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Nov 12 22:40:36.995191 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Nov 12 22:40:36.995202 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Nov 12 22:40:36.995212 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Nov 12 22:40:36.995221 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 12 22:40:36.995231 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Nov 12 22:40:36.995242 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Nov 12 22:40:36.995251 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Nov 12 22:40:36.995262 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Nov 12 22:40:36.995275 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Nov 12 22:40:36.995285 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 12 22:40:36.995295 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 12 22:40:36.995309 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 12 22:40:36.995319 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Nov 12 22:40:36.995329 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 12 22:40:36.995342 kernel: efi: EFI v2.7 by EDK II Nov 12 22:40:36.995352 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Nov 12 22:40:36.995362 kernel: random: crng init done Nov 12 22:40:36.995372 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Nov 12 22:40:36.995382 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Nov 12 22:40:36.995395 kernel: secureboot: Secure boot disabled Nov 12 22:40:36.995405 kernel: SMBIOS 2.8 present. 
Nov 12 22:40:36.995419 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 12 22:40:36.995429 kernel: Hypervisor detected: KVM Nov 12 22:40:36.995438 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 22:40:36.995449 kernel: kvm-clock: using sched offset of 3804162503 cycles Nov 12 22:40:36.995460 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 22:40:36.995470 kernel: tsc: Detected 2794.746 MHz processor Nov 12 22:40:36.995481 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 22:40:36.995491 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 22:40:36.995505 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Nov 12 22:40:36.995515 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 22:40:36.995526 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 22:40:36.995536 kernel: Using GB pages for direct mapping Nov 12 22:40:36.995554 kernel: ACPI: Early table checksum verification disabled Nov 12 22:40:36.995564 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 12 22:40:36.995575 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 12 22:40:36.995585 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:40:36.995596 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:40:36.995610 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 12 22:40:36.995621 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:40:36.995631 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:40:36.995641 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:40:36.995652 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 22:40:36.995662 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 12 22:40:36.995672 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 12 22:40:36.995682 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Nov 12 22:40:36.995693 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 12 22:40:36.995706 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 12 22:40:36.995717 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 12 22:40:36.995726 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 12 22:40:36.995736 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 12 22:40:36.995746 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 12 22:40:36.995756 kernel: No NUMA configuration found Nov 12 22:40:36.995773 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Nov 12 22:40:36.995786 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Nov 12 22:40:36.995796 kernel: Zone ranges: Nov 12 22:40:36.995810 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 22:40:36.995820 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Nov 12 22:40:36.995834 kernel: Normal empty Nov 12 22:40:36.995844 kernel: Movable zone start for each node Nov 12 22:40:36.995854 kernel: Early memory node ranges Nov 12 22:40:36.995863 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Nov 12 22:40:36.995882 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 12 22:40:36.995892 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 12 22:40:36.995902 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Nov 12 22:40:36.995911 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Nov 12 22:40:36.995925 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Nov 12 22:40:36.995935 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Nov 12 22:40:36.995945 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Nov 12 22:40:36.995955 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Nov 12 22:40:36.995968 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 22:40:36.995978 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 22:40:36.996000 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 12 22:40:36.996013 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 22:40:36.996024 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Nov 12 22:40:36.996035 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Nov 12 22:40:36.996045 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 12 22:40:36.996073 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 12 22:40:36.996104 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Nov 12 22:40:36.996121 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 12 22:40:36.996131 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 22:40:36.996141 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 22:40:36.996156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 22:40:36.996166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 22:40:36.996177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 22:40:36.996187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 22:40:36.996197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 22:40:36.996208 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 22:40:36.996218 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 22:40:36.996229 kernel: TSC deadline timer available Nov 12 22:40:36.996240 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 12 22:40:36.996255 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 12 22:40:36.996265 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 12 22:40:36.996276 kernel: kvm-guest: setup PV sched yield Nov 12 22:40:36.996286 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 12 22:40:36.996297 kernel: Booting paravirtualized kernel on KVM Nov 12 22:40:36.996311 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 22:40:36.996322 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 12 22:40:36.996333 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Nov 12 22:40:36.996344 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Nov 12 22:40:36.996359 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 12 22:40:36.996369 kernel: kvm-guest: PV spinlocks enabled Nov 12 22:40:36.996379 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 22:40:36.996392 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:40:36.996407 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 22:40:36.996418 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 22:40:36.996428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 22:40:36.996439 kernel: Fallback order for Node 0: 0 Nov 12 22:40:36.996450 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Nov 12 22:40:36.996465 kernel: Policy zone: DMA32 Nov 12 22:40:36.996476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 22:40:36.996487 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 175776K reserved, 0K cma-reserved) Nov 12 22:40:36.996498 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 22:40:36.996509 kernel: ftrace: allocating 37801 entries in 148 pages Nov 12 22:40:36.996519 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 22:40:36.996530 kernel: Dynamic Preempt: voluntary Nov 12 22:40:36.996541 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 22:40:36.996553 kernel: rcu: RCU event tracing is enabled. Nov 12 22:40:36.996569 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 22:40:36.996580 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 22:40:36.996591 kernel: Rude variant of Tasks RCU enabled. Nov 12 22:40:36.996602 kernel: Tracing variant of Tasks RCU enabled. Nov 12 22:40:36.996612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 22:40:36.996623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 22:40:36.996634 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 12 22:40:36.996645 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 22:40:36.996656 kernel: Console: colour dummy device 80x25 Nov 12 22:40:36.996672 kernel: printk: console [ttyS0] enabled Nov 12 22:40:36.996683 kernel: ACPI: Core revision 20230628 Nov 12 22:40:36.996694 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 12 22:40:36.996706 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 22:40:36.996717 kernel: x2apic enabled Nov 12 22:40:36.996728 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 22:40:36.996739 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 12 22:40:36.996753 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 12 22:40:36.996764 kernel: kvm-guest: setup PV IPIs Nov 12 22:40:36.996779 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 22:40:36.996790 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 12 22:40:36.996802 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Nov 12 22:40:36.996813 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 12 22:40:36.996824 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 12 22:40:36.996835 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 12 22:40:36.996846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 22:40:36.996857 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 22:40:36.996878 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 22:40:36.996893 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 22:40:36.996904 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 12 22:40:36.996915 kernel: RETBleed: Mitigation: untrained return thunk Nov 12 22:40:36.996925 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 22:40:36.996936 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 22:40:36.996947 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 12 22:40:36.996962 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 12 22:40:36.996973 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 12 22:40:36.996989 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 22:40:36.997000 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 22:40:36.997011 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 22:40:36.997022 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 22:40:36.997033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 12 22:40:36.997044 kernel: Freeing SMP alternatives memory: 32K Nov 12 22:40:36.997080 kernel: pid_max: default: 32768 minimum: 301 Nov 12 22:40:36.997091 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 22:40:36.997102 kernel: landlock: Up and running. Nov 12 22:40:36.997125 kernel: SELinux: Initializing. Nov 12 22:40:36.997139 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 22:40:36.997150 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 22:40:36.997161 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 12 22:40:36.997172 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:40:36.997183 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:40:36.997194 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 22:40:36.997205 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 12 22:40:36.997216 kernel: ... version: 0 Nov 12 22:40:36.997231 kernel: ... bit width: 48 Nov 12 22:40:36.997242 kernel: ... generic registers: 6 Nov 12 22:40:36.997253 kernel: ... value mask: 0000ffffffffffff Nov 12 22:40:36.997264 kernel: ... max period: 00007fffffffffff Nov 12 22:40:36.997274 kernel: ... fixed-purpose events: 0 Nov 12 22:40:36.997285 kernel: ... 
event mask: 000000000000003f Nov 12 22:40:36.997296 kernel: signal: max sigframe size: 1776 Nov 12 22:40:36.997306 kernel: rcu: Hierarchical SRCU implementation. Nov 12 22:40:36.997318 kernel: rcu: Max phase no-delay instances is 400. Nov 12 22:40:36.997333 kernel: smp: Bringing up secondary CPUs ... Nov 12 22:40:36.997343 kernel: smpboot: x86: Booting SMP configuration: Nov 12 22:40:36.997354 kernel: .... node #0, CPUs: #1 #2 #3 Nov 12 22:40:36.997365 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 22:40:36.997376 kernel: smpboot: Max logical packages: 1 Nov 12 22:40:36.997387 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Nov 12 22:40:36.997398 kernel: devtmpfs: initialized Nov 12 22:40:36.997408 kernel: x86/mm: Memory block size: 128MB Nov 12 22:40:36.997419 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 12 22:40:36.997434 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 12 22:40:36.997445 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Nov 12 22:40:36.997456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 12 22:40:36.997467 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Nov 12 22:40:36.997478 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 12 22:40:36.997489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 22:40:36.997507 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 22:40:36.997518 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 22:40:36.997528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 22:40:36.997543 kernel: audit: initializing netlink subsys (disabled) Nov 12 22:40:36.997553 kernel: audit: type=2000 audit(1731451236.635:1): state=initialized audit_enabled=0 res=1 Nov 12 22:40:36.997563 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 22:40:36.997573 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 22:40:36.997583 kernel: cpuidle: using governor menu Nov 12 22:40:36.997597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 22:40:36.997607 kernel: dca service started, version 1.12.1 Nov 12 22:40:36.997617 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 12 22:40:36.997627 kernel: PCI: Using configuration type 1 for base access Nov 12 22:40:36.997641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 22:40:36.997652 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 22:40:36.997662 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 22:40:36.997673 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 22:40:36.997683 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 22:40:36.997693 kernel: ACPI: Added _OSI(Module Device) Nov 12 22:40:36.997704 kernel: ACPI: Added _OSI(Processor Device) Nov 12 22:40:36.997714 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 22:40:36.997724 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 22:40:36.997738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 22:40:36.997749 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 22:40:36.997759 kernel: ACPI: Interpreter enabled Nov 12 22:40:36.997769 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 22:40:36.997780 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 22:40:36.997791 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 22:40:36.997801 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 22:40:36.997811 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 12 22:40:36.997822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 22:40:36.998109 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 22:40:36.998303 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 12 22:40:36.998453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 12 22:40:36.998465 kernel: PCI host bridge to bus 0000:00 Nov 12 22:40:36.998613 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 22:40:36.998754 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 22:40:36.998926 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 22:40:36.999045 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 12 22:40:36.999214 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Nov 12 22:40:36.999332 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 12 22:40:36.999447 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 22:40:36.999612 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 12 22:40:36.999762 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 12 22:40:36.999924 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Nov 12 22:40:37.000151 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Nov 12 22:40:37.000295 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 12 22:40:37.000424 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Nov 12 22:40:37.000575 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 22:40:37.000766 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 22:40:37.000911 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Nov 12 22:40:37.001039 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Nov 12 22:40:37.001202 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Nov 12 22:40:37.001349 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 12 22:40:37.001478 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Nov 
12 22:40:37.001624 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Nov 12 22:40:37.001778 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Nov 12 22:40:37.001944 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 22:40:37.002104 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Nov 12 22:40:37.002252 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Nov 12 22:40:37.002380 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 12 22:40:37.002509 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Nov 12 22:40:37.002661 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 12 22:40:37.002791 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 12 22:40:37.002952 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 12 22:40:37.003134 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Nov 12 22:40:37.003266 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Nov 12 22:40:37.003433 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 12 22:40:37.003579 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Nov 12 22:40:37.003591 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 22:40:37.003599 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 22:40:37.003627 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 22:40:37.003642 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 22:40:37.003658 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 12 22:40:37.003666 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 12 22:40:37.003674 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 12 22:40:37.003682 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 12 22:40:37.003689 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 12 22:40:37.003697 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 12 22:40:37.003704 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 12 22:40:37.003716 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 12 22:40:37.003724 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 12 22:40:37.003734 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 12 22:40:37.003742 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 12 22:40:37.003749 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 12 22:40:37.003757 kernel: iommu: Default domain type: Translated Nov 12 22:40:37.003765 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 22:40:37.003772 kernel: efivars: Registered efivars operations Nov 12 22:40:37.003780 kernel: PCI: Using ACPI for IRQ routing Nov 12 22:40:37.003790 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 22:40:37.003798 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 12 22:40:37.003805 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Nov 12 22:40:37.003813 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Nov 12 22:40:37.003820 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Nov 12 22:40:37.003828 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Nov 12 22:40:37.003835 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Nov 12 22:40:37.003843 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Nov 12 22:40:37.003851 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Nov 12 22:40:37.003995 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 12 22:40:37.004255 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 12 22:40:37.004386 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 22:40:37.004396 kernel: vgaarb: loaded Nov 12 22:40:37.004404 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 12 22:40:37.004412 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 12 22:40:37.004420 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 22:40:37.004427 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 22:40:37.004441 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 22:40:37.004449 kernel: pnp: PnP ACPI init Nov 12 22:40:37.004606 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 12 22:40:37.004618 kernel: pnp: PnP ACPI: found 6 devices Nov 12 22:40:37.004626 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 22:40:37.004634 kernel: NET: Registered PF_INET protocol family Nov 12 22:40:37.004669 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 22:40:37.004683 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 22:40:37.004707 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 22:40:37.004717 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 22:40:37.004729 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 22:40:37.004740 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 22:40:37.004751 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 22:40:37.004762 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 22:40:37.004773 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 22:40:37.004785 kernel: NET: Registered PF_XDP protocol family Nov 12 22:40:37.004975 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Nov 12 22:40:37.005174 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Nov 12 22:40:37.005412 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 22:40:37.005573 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 22:40:37.005807 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 22:40:37.006047 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 12 22:40:37.006367 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 12 22:40:37.006532 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 12 22:40:37.006558 kernel: PCI: CLS 0 bytes, default 64 Nov 12 22:40:37.006578 kernel: Initialise system trusted keyrings Nov 12 22:40:37.006596 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 22:40:37.006616 kernel: Key type asymmetric registered Nov 12 22:40:37.006628 kernel: Asymmetric key parser 'x509' registered Nov 12 22:40:37.006638 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 22:40:37.006650 kernel: io scheduler mq-deadline registered Nov 12 22:40:37.006661 kernel: io scheduler kyber registered Nov 12 22:40:37.006673 kernel: io scheduler bfq registered Nov 12 
22:40:37.006684 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 22:40:37.006701 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 12 22:40:37.006713 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 12 22:40:37.006728 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 12 22:40:37.006754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 22:40:37.006765 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 22:40:37.006777 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 22:40:37.006793 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 22:40:37.006805 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 22:40:37.007048 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 12 22:40:37.007285 kernel: rtc_cmos 00:04: registered as rtc0 Nov 12 22:40:37.007304 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 22:40:37.007450 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T22:40:36 UTC (1731451236) Nov 12 22:40:37.007600 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 12 22:40:37.007622 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 12 22:40:37.007633 kernel: efifb: probing for efifb Nov 12 22:40:37.007645 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 12 22:40:37.007655 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 12 22:40:37.007666 kernel: efifb: scrolling: redraw Nov 12 22:40:37.007677 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 12 22:40:37.007688 kernel: Console: switching to colour frame buffer device 160x50 Nov 12 22:40:37.007699 kernel: fb0: EFI VGA frame buffer device Nov 12 22:40:37.007710 kernel: pstore: Using crash dump compression: deflate Nov 12 22:40:37.007721 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 22:40:37.007739 kernel: NET: Registered PF_INET6 protocol family Nov 12 22:40:37.007750 kernel: Segment Routing with IPv6 Nov 12 22:40:37.007761 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 22:40:37.007772 kernel: NET: Registered PF_PACKET protocol family Nov 12 22:40:37.007783 kernel: Key type dns_resolver registered Nov 12 22:40:37.007794 kernel: IPI shorthand broadcast: enabled Nov 12 22:40:37.007805 kernel: sched_clock: Marking stable (1390003381, 193889149)->(1644445507, -60552977) Nov 12 22:40:37.007816 kernel: registered taskstats version 1 Nov 12 22:40:37.007827 kernel: Loading compiled-in X.509 certificates Nov 12 22:40:37.007842 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4' Nov 12 22:40:37.007853 kernel: Key type .fscrypt registered Nov 12 22:40:37.007863 kernel: Key type fscrypt-provisioning registered Nov 12 22:40:37.007886 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 22:40:37.007897 kernel: ima: Allocated hash algorithm: sha1 Nov 12 22:40:37.007909 kernel: ima: No architecture policies found Nov 12 22:40:37.007919 kernel: clk: Disabling unused clocks Nov 12 22:40:37.007930 kernel: Freeing unused kernel image (initmem) memory: 42968K Nov 12 22:40:37.007946 kernel: Write protecting the kernel read-only data: 36864k Nov 12 22:40:37.007957 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Nov 12 22:40:37.007967 kernel: Run /init as init process Nov 12 22:40:37.007977 kernel: with arguments: Nov 12 22:40:37.007988 kernel: /init Nov 12 22:40:37.007999 kernel: with environment: Nov 12 22:40:37.008009 kernel: HOME=/ Nov 12 22:40:37.008020 kernel: TERM=linux Nov 12 22:40:37.008031 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 22:40:37.008045 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:40:37.008077 systemd[1]: Detected virtualization kvm. Nov 12 22:40:37.008089 systemd[1]: Detected architecture x86-64. Nov 12 22:40:37.008100 systemd[1]: Running in initrd. Nov 12 22:40:37.008111 systemd[1]: No hostname configured, using default hostname. Nov 12 22:40:37.008123 systemd[1]: Hostname set to . Nov 12 22:40:37.008135 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:40:37.008146 systemd[1]: Queued start job for default target initrd.target. Nov 12 22:40:37.008161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:40:37.008173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:40:37.008185 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 22:40:37.008197 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:40:37.008208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 22:40:37.008220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 22:40:37.008233 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 22:40:37.008249 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 22:40:37.008261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:40:37.008273 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:40:37.008285 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:40:37.008297 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:40:37.008308 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:40:37.008320 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:40:37.008332 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:40:37.008348 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:40:37.008360 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:40:37.008372 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 12 22:40:37.008383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:40:37.008395 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:40:37.008407 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:40:37.008418 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:40:37.008436 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 22:40:37.008451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:40:37.008462 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 22:40:37.008473 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 22:40:37.008484 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:40:37.008495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:40:37.008506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:40:37.008518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 22:40:37.008529 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:40:37.008543 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 22:40:37.008561 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:40:37.008608 systemd-journald[195]: Collecting audit messages is disabled. Nov 12 22:40:37.008639 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:40:37.008651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:40:37.008664 systemd-journald[195]: Journal started Nov 12 22:40:37.008689 systemd-journald[195]: Runtime Journal (/run/log/journal/d9bde27576e54cb4af705a06ae44760b) is 6.0M, max 48.3M, 42.2M free. Nov 12 22:40:37.040274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 22:40:36.992548 systemd-modules-load[196]: Inserted module 'overlay' Nov 12 22:40:37.042512 kernel: Bridge firewalling registered Nov 12 22:40:37.042531 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:40:37.042399 systemd-modules-load[196]: Inserted module 'br_netfilter' Nov 12 22:40:37.049471 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:40:37.051884 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:40:37.052528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:40:37.059775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:40:37.063311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:40:37.086404 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 22:40:37.090172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:40:37.093600 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Nov 12 22:40:37.099963 dracut-cmdline[222]: dracut-dracut-053 Nov 12 22:40:37.102789 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 12 22:40:37.110164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:40:37.116906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:40:37.128273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:40:37.185758 systemd-resolved[255]: Positive Trust Anchors: Nov 12 22:40:37.185783 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:40:37.185827 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:40:37.191592 systemd-resolved[255]: Defaulting to hostname 'linux'. Nov 12 22:40:37.193112 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:40:37.197979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:40:37.205086 kernel: SCSI subsystem initialized Nov 12 22:40:37.214141 kernel: Loading iSCSI transport class v2.0-870. Nov 12 22:40:37.229109 kernel: iscsi: registered transport (tcp) Nov 12 22:40:37.257235 kernel: iscsi: registered transport (qla4xxx) Nov 12 22:40:37.257332 kernel: QLogic iSCSI HBA Driver Nov 12 22:40:37.317000 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 22:40:37.330249 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 22:40:37.359139 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 22:40:37.359231 kernel: device-mapper: uevent: version 1.0.3 Nov 12 22:40:37.359245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 22:40:37.406139 kernel: raid6: avx2x4 gen() 18952 MB/s Nov 12 22:40:37.423114 kernel: raid6: avx2x2 gen() 23860 MB/s Nov 12 22:40:37.440269 kernel: raid6: avx2x1 gen() 23308 MB/s Nov 12 22:40:37.440378 kernel: raid6: using algorithm avx2x2 gen() 23860 MB/s Nov 12 22:40:37.458388 kernel: raid6: .... xor() 19329 MB/s, rmw enabled Nov 12 22:40:37.458500 kernel: raid6: using avx2x2 recovery algorithm Nov 12 22:40:37.485098 kernel: xor: automatically using best checksumming function avx Nov 12 22:40:37.653102 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 22:40:37.667410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:40:37.679252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:40:37.694123 systemd-udevd[415]: Using default interface naming scheme 'v255'. 
Nov 12 22:40:37.700000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:40:37.706363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 22:40:37.723983 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Nov 12 22:40:37.759427 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:40:37.771260 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:40:37.841456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:40:37.855011 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 22:40:37.869225 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 22:40:37.873356 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:40:37.873518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:40:37.873897 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:40:37.884071 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 12 22:40:37.897726 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 12 22:40:37.902384 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 22:40:37.902406 kernel: GPT:9289727 != 19775487 Nov 12 22:40:37.902416 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 22:40:37.902427 kernel: GPT:9289727 != 19775487 Nov 12 22:40:37.902437 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 22:40:37.902447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:40:37.902457 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 22:40:37.885268 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 22:40:37.895969 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:40:37.915598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:40:37.920756 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 22:40:37.920781 kernel: libata version 3.00 loaded. Nov 12 22:40:37.915721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:40:37.917804 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:40:37.925165 kernel: AES CTR mode by8 optimization enabled Nov 12 22:40:37.919458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:40:37.919582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:40:37.920868 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 12 22:40:37.929125 kernel: ahci 0000:00:1f.2: version 3.0 Nov 12 22:40:37.946232 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 12 22:40:37.946254 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 12 22:40:37.946472 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 12 22:40:37.946663 kernel: scsi host0: ahci Nov 12 22:40:37.946884 kernel: scsi host1: ahci Nov 12 22:40:37.947226 kernel: scsi host2: ahci Nov 12 22:40:37.947943 kernel: scsi host3: ahci Nov 12 22:40:37.948176 kernel: scsi host4: ahci Nov 12 22:40:37.948386 kernel: scsi host5: ahci Nov 12 22:40:37.948582 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Nov 12 22:40:37.948600 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Nov 12 22:40:37.948612 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Nov 12 22:40:37.948622 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Nov 12 22:40:37.948633 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Nov 12 22:40:37.948646 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Nov 12 22:40:37.929826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:40:37.936633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:40:37.936743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:40:37.961932 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (471) Nov 12 22:40:37.961956 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460) Nov 12 22:40:37.951169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:40:37.975390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 22:40:37.985936 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 22:40:37.995546 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 22:40:37.995770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 22:40:38.005457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:40:38.102752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 22:40:38.104656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:40:38.108224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 22:40:38.114985 disk-uuid[556]: Primary Header is updated. Nov 12 22:40:38.114985 disk-uuid[556]: Secondary Entries is updated. Nov 12 22:40:38.114985 disk-uuid[556]: Secondary Header is updated. Nov 12 22:40:38.119082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:40:38.127096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:40:38.132011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 22:40:38.254090 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 12 22:40:38.254156 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 12 22:40:38.255092 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 12 22:40:38.256075 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 12 22:40:38.257087 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 12 22:40:38.258244 kernel: ata3.00: applying bridge limits Nov 12 22:40:38.258261 kernel: ata3.00: configured for UDMA/100 Nov 12 22:40:38.259094 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 12 22:40:38.264080 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 12 22:40:38.264108 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 12 22:40:38.303102 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 12 22:40:38.316090 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 22:40:38.316113 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 12 22:40:39.129080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 22:40:39.129595 disk-uuid[562]: The operation has completed successfully. Nov 12 22:40:39.168225 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 22:40:39.168425 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 22:40:39.207405 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 22:40:39.211006 sh[595]: Success Nov 12 22:40:39.226071 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 12 22:40:39.264404 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 22:40:39.280211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 22:40:39.284410 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 22:40:39.298029 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a Nov 12 22:40:39.298120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:40:39.298133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 22:40:39.298144 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 22:40:39.298755 kernel: BTRFS info (device dm-0): using free space tree Nov 12 22:40:39.304505 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 22:40:39.307123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 22:40:39.321203 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 22:40:39.323829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 22:40:39.334759 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:40:39.334799 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:40:39.334832 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:40:39.338094 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:40:39.349848 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 22:40:39.353102 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:40:39.364795 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 12 22:40:39.372415 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 22:40:39.475436 ignition[691]: Ignition 2.20.0 Nov 12 22:40:39.475461 ignition[691]: Stage: fetch-offline Nov 12 22:40:39.475504 ignition[691]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:40:39.475515 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:40:39.475706 ignition[691]: parsed url from cmdline: "" Nov 12 22:40:39.475713 ignition[691]: no config URL provided Nov 12 22:40:39.475721 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 22:40:39.475736 ignition[691]: no config at "/usr/lib/ignition/user.ign" Nov 12 22:40:39.475931 ignition[691]: op(1): [started] loading QEMU firmware config module Nov 12 22:40:39.475939 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 12 22:40:39.487723 ignition[691]: op(1): [finished] loading QEMU firmware config module Nov 12 22:40:39.502509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:40:39.516230 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:40:39.533131 ignition[691]: parsing config with SHA512: fb9e47f3b96c93c0fdb1c3fdfb6cd81e648eb6d5467655687fe727f374b343b98f367364200793afc38642dceb3bbb06f5c3b35cec33e95f0068ca83815663f8 Nov 12 22:40:39.541701 systemd-networkd[783]: lo: Link UP Nov 12 22:40:39.541714 systemd-networkd[783]: lo: Gained carrier Nov 12 22:40:39.543925 systemd-networkd[783]: Enumeration completed Nov 12 22:40:39.544136 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:40:39.544517 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:40:39.544522 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:40:39.549185 ignition[691]: fetch-offline: fetch-offline passed Nov 12 22:40:39.546544 systemd[1]: Reached target network.target - Network. Nov 12 22:40:39.549292 ignition[691]: Ignition finished successfully Nov 12 22:40:39.546933 systemd-networkd[783]: eth0: Link UP Nov 12 22:40:39.546939 systemd-networkd[783]: eth0: Gained carrier Nov 12 22:40:39.546951 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:40:39.548727 unknown[691]: fetched base config from "system" Nov 12 22:40:39.548736 unknown[691]: fetched user config from "qemu" Nov 12 22:40:39.551911 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:40:39.553724 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 12 22:40:39.572181 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:40:39.572349 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 12 22:40:39.616856 ignition[786]: Ignition 2.20.0 Nov 12 22:40:39.616870 ignition[786]: Stage: kargs Nov 12 22:40:39.617112 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:40:39.617125 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:40:39.621031 ignition[786]: kargs: kargs passed Nov 12 22:40:39.621097 ignition[786]: Ignition finished successfully Nov 12 22:40:39.625294 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 22:40:39.642320 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 22:40:39.661270 ignition[796]: Ignition 2.20.0 Nov 12 22:40:39.661282 ignition[796]: Stage: disks Nov 12 22:40:39.661454 ignition[796]: no configs at "/usr/lib/ignition/base.d" Nov 12 22:40:39.661465 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:40:39.665184 ignition[796]: disks: disks passed Nov 12 22:40:39.665237 ignition[796]: Ignition finished successfully Nov 12 22:40:39.668976 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 22:40:39.670490 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 22:40:39.672309 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:40:39.673441 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:40:39.673764 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:40:39.674283 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:40:39.689335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 22:40:39.702041 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 22:40:39.710415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 22:40:39.719293 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 22:40:39.887089 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none. Nov 12 22:40:39.887839 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 22:40:39.888785 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 22:40:39.904306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:40:39.906668 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 22:40:39.907016 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 22:40:39.907075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 22:40:39.916894 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Nov 12 22:40:39.916921 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:40:39.907100 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:40:39.921306 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:40:39.921349 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:40:39.923093 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:40:39.924775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 22:40:39.952616 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
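[Editor's note] The kargs and disks stages above, and the fsck of LABEL=ROOT, operate on the virtio disk whose partitions (vda1..vda9) were scanned earlier; vda9 carries the ext4 ROOT filesystem and vda6 the btrfs OEM volume. A quick way to reproduce that view on a running machine (a sketch; device names match this log only):

  lsblk -f /dev/vda           # partition layout with labels, filesystems and mountpoints
  blkid /dev/vda9 /dev/vda6   # confirms LABEL=ROOT (ext4) and the OEM btrfs volume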
Nov 12 22:40:39.953973 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 22:40:40.039509 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 22:40:40.044406 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Nov 12 22:40:40.050694 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 22:40:40.055447 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 22:40:40.167851 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 22:40:40.183143 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 22:40:40.185365 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 22:40:40.192094 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:40:40.211318 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 22:40:40.223584 ignition[927]: INFO : Ignition 2.20.0 Nov 12 22:40:40.223584 ignition[927]: INFO : Stage: mount Nov 12 22:40:40.225857 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:40:40.225857 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:40:40.225857 ignition[927]: INFO : mount: mount passed Nov 12 22:40:40.225857 ignition[927]: INFO : Ignition finished successfully Nov 12 22:40:40.228537 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 22:40:40.254381 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 22:40:40.296403 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 22:40:40.317349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 22:40:40.325870 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Nov 12 22:40:40.325912 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 12 22:40:40.325946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 22:40:40.327396 kernel: BTRFS info (device vda6): using free space tree Nov 12 22:40:40.330091 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 22:40:40.331764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 22:40:40.384266 ignition[958]: INFO : Ignition 2.20.0 Nov 12 22:40:40.384266 ignition[958]: INFO : Stage: files Nov 12 22:40:40.386445 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:40:40.386445 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:40:40.390195 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:40:40.392403 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:40:40.392403 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:40:40.398018 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:40:40.399976 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:40:40.402274 unknown[958]: wrote ssh authorized keys file for user: core Nov 12 22:40:40.403649 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:40:40.406927 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:40:40.409295 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 22:40:40.468292 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 22:40:40.584013 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:40:40.584013 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:40:40.587994 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 12 22:40:40.638416 systemd-networkd[783]: eth0: Gained IPv6LL Nov 12 22:40:41.105458 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 22:40:41.217586 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:40:41.217586 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:40:41.222748 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 22:40:41.522087 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 22:40:42.089191 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:40:42.089191 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 22:40:42.092992 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 22:40:42.121108 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:40:42.148386 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:40:42.150200 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 22:40:42.150200 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:40:42.153075 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:40:42.154529 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:40:42.156339 
ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:40:42.158043 ignition[958]: INFO : files: files passed Nov 12 22:40:42.158824 ignition[958]: INFO : Ignition finished successfully Nov 12 22:40:42.162472 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:40:42.171261 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:40:42.174488 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:40:42.176898 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:40:42.177070 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:40:42.185422 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 22:40:42.188821 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:40:42.190506 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:40:42.193305 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:40:42.191264 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:40:42.194257 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:40:42.203286 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 22:40:42.227537 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:40:42.227714 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:40:42.230271 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:40:42.232444 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:40:42.234631 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:40:42.247292 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:40:42.262611 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:40:42.265417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:40:42.280008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:40:42.281390 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:40:42.283739 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:40:42.285821 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:40:42.285962 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:40:42.288375 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:40:42.290043 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:40:42.292185 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:40:42.294304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:40:42.296459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:40:42.298709 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
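[Editor's note] Everything the files stage logged above, the ssh key for "core", the helm and cilium downloads, the kubernetes sysext link under /etc/extensions, and the prepare-helm/coreos-metadata unit and preset handling, is driven by the Ignition config fetched from qemu earlier. A minimal Butane sketch of the kind of config that produces such operations; the key material and unit contents below are placeholders, and only paths and URLs that already appear in this log are reused:

  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - "ssh-ed25519 AAAA... user@host"   # placeholder key
  storage:
    files:
      - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true
        contents: |
          [Unit]
          Description=Unpack helm to /opt/bin
          [Service]
          Type=oneshot
          ExecStartPre=/usr/bin/mkdir -p /opt/bin
          ExecStart=/usr/bin/tar -v -xf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm
          [Install]
          WantedBy=multi-user.target

Such a file is rendered to Ignition JSON with butane and handed to the VM via the fw_cfg key shown earlier.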
Nov 12 22:40:42.300959 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:40:42.303384 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:40:42.305531 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:40:42.307766 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:40:42.309592 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:40:42.309761 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:40:42.312156 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:40:42.313687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:40:42.315890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:40:42.316074 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:40:42.318181 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:40:42.318318 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:40:42.320765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:40:42.320897 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:40:42.322841 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:40:42.324640 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:40:42.328114 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:40:42.329551 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:40:42.331498 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:40:42.333640 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:40:42.333773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:40:42.335536 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:40:42.335656 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:40:42.337618 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:40:42.337769 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:40:42.340369 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:40:42.340500 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:40:42.353227 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 22:40:42.355176 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:40:42.355324 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:40:42.358480 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:40:42.359621 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:40:42.360000 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:40:42.362066 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 22:40:42.362340 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:40:42.369139 ignition[1012]: INFO : Ignition 2.20.0 Nov 12 22:40:42.369139 ignition[1012]: INFO : Stage: umount Nov 12 22:40:42.370197 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 12 22:40:42.373277 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:40:42.373277 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:40:42.373277 ignition[1012]: INFO : umount: umount passed Nov 12 22:40:42.373277 ignition[1012]: INFO : Ignition finished successfully Nov 12 22:40:42.370369 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:40:42.374335 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:40:42.374504 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:40:42.375794 systemd[1]: Stopped target network.target - Network. Nov 12 22:40:42.377273 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:40:42.377337 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:40:42.379041 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:40:42.379118 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:40:42.379545 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:40:42.379600 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 22:40:42.379892 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:40:42.379948 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:40:42.380564 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:40:42.386523 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:40:42.393208 systemd-networkd[783]: eth0: DHCPv6 lease lost Nov 12 22:40:42.393525 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:40:42.393711 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:40:42.397679 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:40:42.397868 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:40:42.399964 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:40:42.400088 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:40:42.410272 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:40:42.411273 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 22:40:42.411347 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:40:42.412881 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:40:42.412957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:40:42.415148 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:40:42.415210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:40:42.416448 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:40:42.416557 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:40:42.419026 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:40:42.431789 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:40:42.434725 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:40:42.434936 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:40:42.448211 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 12 22:40:42.448425 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:40:42.450917 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:40:42.450979 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:40:42.453043 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:40:42.453106 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:40:42.455125 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 22:40:42.455178 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:40:42.457421 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:40:42.457474 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:40:42.459620 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:40:42.459679 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:40:42.474467 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 22:40:42.477150 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:40:42.477261 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:40:42.479709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:40:42.479779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:40:42.484601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:40:42.485720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:40:42.655435 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:40:42.656588 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:40:42.658832 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:40:42.660957 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:40:42.661018 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:40:42.673234 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:40:42.683089 systemd[1]: Switching root. Nov 12 22:40:42.717376 systemd-journald[195]: Journal stopped Nov 12 22:40:43.985975 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Nov 12 22:40:43.986047 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 22:40:43.996068 kernel: SELinux: policy capability open_perms=1 Nov 12 22:40:43.996088 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 22:40:43.996126 kernel: SELinux: policy capability always_check_network=0 Nov 12 22:40:43.996142 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 22:40:43.996158 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 22:40:43.996174 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 22:40:43.996191 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 22:40:43.996209 kernel: audit: type=1403 audit(1731451243.194:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 22:40:43.996228 systemd[1]: Successfully loaded SELinux policy in 50.225ms. Nov 12 22:40:43.996267 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.941ms. 
Nov 12 22:40:43.996296 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:40:43.996314 systemd[1]: Detected virtualization kvm. Nov 12 22:40:43.996332 systemd[1]: Detected architecture x86-64. Nov 12 22:40:43.996349 systemd[1]: Detected first boot. Nov 12 22:40:43.996366 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:40:43.996383 zram_generator::config[1056]: No configuration found. Nov 12 22:40:43.996402 systemd[1]: Populated /etc with preset unit settings. Nov 12 22:40:43.996420 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 22:40:43.996439 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 22:40:43.996459 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 22:40:43.996477 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 22:40:43.996494 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 22:40:43.996511 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 22:40:43.996529 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 22:40:43.996547 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 22:40:43.996564 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 22:40:43.996591 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 22:40:43.996612 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 22:40:43.996629 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:40:43.996647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:40:43.996665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 22:40:43.996682 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 22:40:43.996708 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 22:40:43.996727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:40:43.996745 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 22:40:43.996762 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:40:43.996783 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 22:40:43.996799 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 22:40:43.996816 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 22:40:43.996833 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 22:40:43.996850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:40:43.996868 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:40:43.996885 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:40:43.996902 systemd[1]: Reached target swap.target - Swaps. 
Nov 12 22:40:43.996931 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 22:40:43.996948 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 22:40:43.996964 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:40:43.996980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:40:43.996999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:40:43.997016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 22:40:43.997032 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 22:40:43.997048 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 22:40:43.997080 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 22:40:43.997102 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:40:43.997119 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 22:40:43.997135 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 22:40:43.997153 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 22:40:43.997171 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 22:40:43.997189 systemd[1]: Reached target machines.target - Containers. Nov 12 22:40:43.997206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 22:40:43.997227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:40:43.997250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:40:43.997268 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 22:40:43.997287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:40:43.997304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:40:43.997322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:40:43.997340 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 22:40:43.997357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:40:43.997376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 22:40:43.997394 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 22:40:43.997416 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 22:40:43.997433 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 22:40:43.997455 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 22:40:43.997472 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:40:43.997489 kernel: fuse: init (API version 7.39) Nov 12 22:40:43.997505 kernel: loop: module loaded Nov 12 22:40:43.997522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:40:43.997539 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 12 22:40:43.997565 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 22:40:43.997624 systemd-journald[1126]: Collecting audit messages is disabled. Nov 12 22:40:43.997657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:40:43.997679 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 22:40:43.997696 systemd[1]: Stopped verity-setup.service. Nov 12 22:40:43.997724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:40:43.997741 kernel: ACPI: bus type drm_connector registered Nov 12 22:40:43.997768 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 22:40:43.997787 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 22:40:43.997804 systemd-journald[1126]: Journal started Nov 12 22:40:43.997834 systemd-journald[1126]: Runtime Journal (/run/log/journal/d9bde27576e54cb4af705a06ae44760b) is 6.0M, max 48.3M, 42.2M free. Nov 12 22:40:43.767989 systemd[1]: Queued start job for default target multi-user.target. Nov 12 22:40:43.786418 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 22:40:43.786914 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 22:40:44.001292 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:40:44.002872 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 22:40:44.004102 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 22:40:44.005386 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 22:40:44.006709 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 22:40:44.008033 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 22:40:44.009594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:40:44.011234 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 22:40:44.011420 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 22:40:44.013105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:40:44.013287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:40:44.014789 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:40:44.014974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:40:44.016571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:40:44.016757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:40:44.018348 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 22:40:44.018528 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 22:40:44.019971 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:40:44.020174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:40:44.021793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:40:44.023277 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 22:40:44.024878 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 22:40:44.040065 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 12 22:40:44.054175 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 22:40:44.056854 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 22:40:44.058040 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 22:40:44.058089 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:40:44.060517 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 22:40:44.063454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 22:40:44.066507 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 22:40:44.067840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:40:44.071504 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 22:40:44.087368 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 22:40:44.088788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:40:44.093463 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 22:40:44.094925 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:40:44.097571 systemd-journald[1126]: Time spent on flushing to /var/log/journal/d9bde27576e54cb4af705a06ae44760b is 14.744ms for 1042 entries. Nov 12 22:40:44.097571 systemd-journald[1126]: System Journal (/var/log/journal/d9bde27576e54cb4af705a06ae44760b) is 8.0M, max 195.6M, 187.6M free. Nov 12 22:40:44.120927 systemd-journald[1126]: Received client request to flush runtime journal. Nov 12 22:40:44.098863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:40:44.101323 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 22:40:44.108272 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 22:40:44.112071 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:40:44.113975 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 22:40:44.117496 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 22:40:44.119721 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 22:40:44.124724 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 22:40:44.127142 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 22:40:44.134336 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 22:40:44.138753 kernel: loop0: detected capacity change from 0 to 140992 Nov 12 22:40:44.145732 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 22:40:44.152204 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 22:40:44.153957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
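[Editor's note] The systemd-journald lines above report the runtime journal under /run/log/journal, the persistent system journal under /var/log/journal and the flush between them. The same numbers, and a per-unit view of this boot, can be pulled later with journalctl (a sketch; the unit name is one that appears in this log):

  journalctl --disk-usage                   # combined size of runtime and persistent journals
  journalctl -b -u ignition-files.service   # this boot's messages for a single unit
  journalctl -b -k                          # kernel messages only, as interleaved above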
Nov 12 22:40:44.166095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 22:40:44.170655 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 22:40:44.178256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:40:44.179195 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 22:40:44.179795 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 22:40:44.184247 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 22:40:44.198081 kernel: loop1: detected capacity change from 0 to 138184 Nov 12 22:40:44.201715 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Nov 12 22:40:44.201741 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Nov 12 22:40:44.209902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:40:44.234118 kernel: loop2: detected capacity change from 0 to 211296 Nov 12 22:40:44.261087 kernel: loop3: detected capacity change from 0 to 140992 Nov 12 22:40:44.274085 kernel: loop4: detected capacity change from 0 to 138184 Nov 12 22:40:44.290091 kernel: loop5: detected capacity change from 0 to 211296 Nov 12 22:40:44.298132 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 22:40:44.298742 (sd-merge)[1195]: Merged extensions into '/usr'. Nov 12 22:40:44.305508 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 22:40:44.305532 systemd[1]: Reloading... Nov 12 22:40:44.376084 zram_generator::config[1222]: No configuration found. Nov 12 22:40:44.397768 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 22:40:44.501853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:40:44.552561 systemd[1]: Reloading finished in 246 ms. Nov 12 22:40:44.586741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 22:40:44.588625 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 22:40:44.601192 systemd[1]: Starting ensure-sysext.service... Nov 12 22:40:44.603494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:40:44.609756 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Nov 12 22:40:44.609774 systemd[1]: Reloading... Nov 12 22:40:44.628744 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 22:40:44.629175 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 22:40:44.630221 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 22:40:44.630527 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Nov 12 22:40:44.630610 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Nov 12 22:40:44.634137 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 12 22:40:44.634313 systemd-tmpfiles[1259]: Skipping /boot Nov 12 22:40:44.647634 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:40:44.647793 systemd-tmpfiles[1259]: Skipping /boot Nov 12 22:40:44.671089 zram_generator::config[1286]: No configuration found. Nov 12 22:40:44.784129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:40:44.836277 systemd[1]: Reloading finished in 226 ms. Nov 12 22:40:44.855596 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 22:40:44.869997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:40:44.880882 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:40:44.884329 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 22:40:44.887606 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 22:40:44.892086 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:40:44.896239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:40:44.902440 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 22:40:44.907739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:40:44.907925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:40:44.909293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:40:44.913647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:40:44.915985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:40:44.917258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:40:44.920282 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 22:40:44.921362 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:40:44.922366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:40:44.922549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:40:44.924293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:40:44.924467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:40:44.929778 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:40:44.929978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:40:44.934156 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 22:40:44.939630 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Nov 12 22:40:44.943219 systemd[1]: Finished ensure-sysext.service. Nov 12 22:40:44.944764 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
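[Editor's note] The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what turns the kubernetes .raw linked under /etc/extensions during the files stage into content under /usr. The merged state can be inspected or re-applied with the same tool (a sketch):

  systemd-sysext status    # which hierarchies are overlaid, and by which extensions
  systemd-sysext refresh   # unmerge and re-merge after adding or removing a .raw under /etc/extensions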
Nov 12 22:40:44.950588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:40:44.950762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:40:44.960276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:40:44.963305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:40:44.965573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:40:44.968460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:40:44.970822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:40:44.972838 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 22:40:44.976928 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 22:40:44.978174 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:40:44.978483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:40:44.980601 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:40:44.982137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:40:44.983526 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 22:40:44.985226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:40:44.985674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:40:44.988216 augenrules[1375]: No rules Nov 12 22:40:44.988725 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:40:44.988904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:40:44.994465 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:40:44.995117 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:40:44.997673 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 22:40:45.000394 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:40:45.000581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:40:45.017512 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 22:40:45.025264 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:40:45.026386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:40:45.026461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:40:45.026484 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 12 22:40:45.047095 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370) Nov 12 22:40:45.049071 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1370) Nov 12 22:40:45.075856 systemd-resolved[1329]: Positive Trust Anchors: Nov 12 22:40:45.076214 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:40:45.076251 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:40:45.080969 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 22:40:45.082135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1370) Nov 12 22:40:45.084834 systemd-resolved[1329]: Defaulting to hostname 'linux'. Nov 12 22:40:45.090131 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:40:45.093522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:40:45.111033 systemd-networkd[1403]: lo: Link UP Nov 12 22:40:45.111046 systemd-networkd[1403]: lo: Gained carrier Nov 12 22:40:45.113429 systemd-networkd[1403]: Enumeration completed Nov 12 22:40:45.113866 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:40:45.113870 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:40:45.114777 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:40:45.115378 systemd-networkd[1403]: eth0: Link UP Nov 12 22:40:45.115395 systemd-networkd[1403]: eth0: Gained carrier Nov 12 22:40:45.115409 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:40:45.118005 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:40:45.119379 systemd[1]: Reached target network.target - Network. Nov 12 22:40:45.125244 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 22:40:45.125190 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:40:45.126259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 22:40:45.126368 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection. Nov 12 22:40:46.441235 systemd-resolved[1329]: Clock change detected. Flushing caches. Nov 12 22:40:46.441382 systemd-timesyncd[1368]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 22:40:46.441430 systemd-timesyncd[1368]: Initial clock synchronization to Tue 2024-11-12 22:40:46.441194 UTC. Nov 12 22:40:46.446493 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
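[Editor's note] After switch-root, systemd-networkd again matches eth0 against the shipped /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.30/16 over DHCP; systemd-resolved and systemd-timesyncd pick the configuration up from there. To pin per-interface behaviour instead of relying on that catch-all default, a drop-in of the same shape can be added under /etc; a minimal sketch (the file name and match are illustrative, not part of the shipped image):

  # /etc/systemd/network/10-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=yes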
Nov 12 22:40:46.448083 kernel: ACPI: button: Power Button [PWRF] Nov 12 22:40:46.449638 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 22:40:46.451749 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 22:40:46.460882 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 22:40:46.468369 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:40:46.471464 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 12 22:40:46.475425 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 12 22:40:46.491816 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 22:40:46.492025 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 22:40:46.492580 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 22:40:46.506940 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 22:40:46.505216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:40:46.567106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:40:46.619440 kernel: kvm_amd: TSC scaling supported Nov 12 22:40:46.619508 kernel: kvm_amd: Nested Virtualization enabled Nov 12 22:40:46.619523 kernel: kvm_amd: Nested Paging enabled Nov 12 22:40:46.620407 kernel: kvm_amd: LBR virtualization supported Nov 12 22:40:46.620424 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 22:40:46.621433 kernel: kvm_amd: Virtual GIF supported Nov 12 22:40:46.642433 kernel: EDAC MC: Ver: 3.0.0 Nov 12 22:40:46.679784 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:40:46.696572 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:40:46.705156 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:40:46.733630 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:40:46.735317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:40:46.736602 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:40:46.737979 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:40:46.739468 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:40:46.741151 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:40:46.742566 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:40:46.743938 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:40:46.745384 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:40:46.745417 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:40:46.746484 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:40:46.748530 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:40:46.751516 systemd[1]: Starting docker.socket - Docker Socket for the API... 
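[Editor's note] Several entries here and below concern the update stack: update-engine-stub.timer is skipped unless /usr/.noupdate exists, update_engine starts and schedules its first check, and locksmithd runs as the cluster reboot manager, while the files stage earlier wrote /etc/flatcar/update.conf. That update.conf is a small key=value file; a sketch of typical contents (values are placeholders, the file actually written on this machine is not shown in the log):

  # /etc/flatcar/update.conf
  GROUP=stable            # release channel followed by update_engine
  REBOOT_STRATEGY=off     # how locksmithd schedules reboots after an update is applied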
Nov 12 22:40:46.759864 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:40:46.762433 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:40:46.764208 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 22:40:46.765572 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:40:46.766835 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:40:46.767856 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:40:46.767892 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:40:46.769184 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:40:46.771706 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 22:40:46.775668 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:40:46.776225 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:40:46.779504 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 22:40:46.780570 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:40:46.783015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:40:46.786455 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:40:46.786940 jq[1437]: false Nov 12 22:40:46.791598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:40:46.795559 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:40:46.801288 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 22:40:46.803733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 22:40:46.804379 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 22:40:46.805678 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:40:46.809521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:40:46.812218 extend-filesystems[1438]: Found loop3 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found loop4 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found loop5 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found sr0 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda1 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda2 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda3 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found usr Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda4 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda6 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda7 Nov 12 22:40:46.813434 extend-filesystems[1438]: Found vda9 Nov 12 22:40:46.813434 extend-filesystems[1438]: Checking size of /dev/vda9 Nov 12 22:40:46.818175 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Nov 12 22:40:46.821836 jq[1449]: true Nov 12 22:40:46.827886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:40:46.828146 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 22:40:46.829553 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:40:46.829790 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:40:46.832949 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:40:46.833210 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 22:40:46.838758 dbus-daemon[1436]: [system] SELinux support is enabled Nov 12 22:40:46.839856 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 22:40:46.845141 update_engine[1447]: I20241112 22:40:46.845062 1447 main.cc:92] Flatcar Update Engine starting Nov 12 22:40:46.850134 extend-filesystems[1438]: Resized partition /dev/vda9 Nov 12 22:40:46.855726 update_engine[1447]: I20241112 22:40:46.855669 1447 update_check_scheduler.cc:74] Next update check in 7m39s Nov 12 22:40:46.856354 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:40:46.860079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:40:46.860115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:40:46.862364 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:40:46.862387 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 22:40:46.866189 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:40:46.866376 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 22:40:46.871427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1363) Nov 12 22:40:46.871480 jq[1459]: true Nov 12 22:40:46.872549 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 22:40:46.885483 tar[1458]: linux-amd64/helm Nov 12 22:40:46.893778 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:40:46.895561 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 22:40:46.895603 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 22:40:46.897869 systemd-logind[1445]: New seat seat0. Nov 12 22:40:46.900310 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:40:46.916202 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 22:40:46.946273 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 22:40:46.946273 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 22:40:46.946273 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 22:40:46.950550 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Nov 12 22:40:46.953675 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Nov 12 22:40:46.954071 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:40:46.958688 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:40:46.960879 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:40:46.962304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 22:40:46.965935 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 22:40:47.108017 containerd[1472]: time="2024-11-12T22:40:47.107841376Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 12 22:40:47.135400 containerd[1472]: time="2024-11-12T22:40:47.135326360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137205 containerd[1472]: time="2024-11-12T22:40:47.137169177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137205 containerd[1472]: time="2024-11-12T22:40:47.137193443Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:40:47.137308 containerd[1472]: time="2024-11-12T22:40:47.137208681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:40:47.137434 containerd[1472]: time="2024-11-12T22:40:47.137405310Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:40:47.137434 containerd[1472]: time="2024-11-12T22:40:47.137424987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137510 containerd[1472]: time="2024-11-12T22:40:47.137497252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137544 containerd[1472]: time="2024-11-12T22:40:47.137512772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137739 containerd[1472]: time="2024-11-12T22:40:47.137709541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137739 containerd[1472]: time="2024-11-12T22:40:47.137726933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137799 containerd[1472]: time="2024-11-12T22:40:47.137738976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137799 containerd[1472]: time="2024-11-12T22:40:47.137747883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:40:47.137884 containerd[1472]: time="2024-11-12T22:40:47.137862388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Nov 12 22:40:47.138136 containerd[1472]: time="2024-11-12T22:40:47.138106035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:40:47.138247 containerd[1472]: time="2024-11-12T22:40:47.138225348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:40:47.138247 containerd[1472]: time="2024-11-12T22:40:47.138239755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:40:47.138374 containerd[1472]: time="2024-11-12T22:40:47.138352186Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 22:40:47.138432 containerd[1472]: time="2024-11-12T22:40:47.138412800Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:40:47.180657 containerd[1472]: time="2024-11-12T22:40:47.180625885Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:40:47.180709 containerd[1472]: time="2024-11-12T22:40:47.180666671Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:40:47.180709 containerd[1472]: time="2024-11-12T22:40:47.180685847Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:40:47.180709 containerd[1472]: time="2024-11-12T22:40:47.180699824Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:40:47.180812 containerd[1472]: time="2024-11-12T22:40:47.180712487Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:40:47.180898 containerd[1472]: time="2024-11-12T22:40:47.180841099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:40:47.181111 containerd[1472]: time="2024-11-12T22:40:47.181053036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 22:40:47.181219 containerd[1472]: time="2024-11-12T22:40:47.181159546Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 22:40:47.181219 containerd[1472]: time="2024-11-12T22:40:47.181177690Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:40:47.181219 containerd[1472]: time="2024-11-12T22:40:47.181190294Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:40:47.181219 containerd[1472]: time="2024-11-12T22:40:47.181202527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181219 containerd[1472]: time="2024-11-12T22:40:47.181214199Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181225790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181238184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181250647Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181264984Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181276736Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181287055Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181304548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181317252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181329104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181385320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181398975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181405 containerd[1472]: time="2024-11-12T22:40:47.181410868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181421908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181433260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181461573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181475679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181486660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181498041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181509603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181539288Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181557022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181575857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181588841Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181652962Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181667239Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:40:47.181833 containerd[1472]: time="2024-11-12T22:40:47.181676686Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 22:40:47.182232 containerd[1472]: time="2024-11-12T22:40:47.181703346Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:40:47.182232 containerd[1472]: time="2024-11-12T22:40:47.181712684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:40:47.182232 containerd[1472]: time="2024-11-12T22:40:47.181726620Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:40:47.182232 containerd[1472]: time="2024-11-12T22:40:47.181742720Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:40:47.182232 containerd[1472]: time="2024-11-12T22:40:47.181775542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 22:40:47.182392 containerd[1472]: time="2024-11-12T22:40:47.182093438Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:40:47.182392 containerd[1472]: time="2024-11-12T22:40:47.182133784Z" level=info msg="Connect containerd service" Nov 12 22:40:47.182392 containerd[1472]: time="2024-11-12T22:40:47.182176634Z" level=info msg="using legacy CRI server" Nov 12 22:40:47.182392 containerd[1472]: time="2024-11-12T22:40:47.182185260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:40:47.182392 containerd[1472]: time="2024-11-12T22:40:47.182306157Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:40:47.183123 containerd[1472]: time="2024-11-12T22:40:47.183090939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:40:47.185223 
containerd[1472]: time="2024-11-12T22:40:47.183305412Z" level=info msg="Start subscribing containerd event" Nov 12 22:40:47.185223 containerd[1472]: time="2024-11-12T22:40:47.185212680Z" level=info msg="Start recovering state" Nov 12 22:40:47.185317 containerd[1472]: time="2024-11-12T22:40:47.185295786Z" level=info msg="Start event monitor" Nov 12 22:40:47.185365 containerd[1472]: time="2024-11-12T22:40:47.185325472Z" level=info msg="Start snapshots syncer" Nov 12 22:40:47.185365 containerd[1472]: time="2024-11-12T22:40:47.185334108Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:40:47.185423 containerd[1472]: time="2024-11-12T22:40:47.185362942Z" level=info msg="Start streaming server" Nov 12 22:40:47.185651 containerd[1472]: time="2024-11-12T22:40:47.183841177Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:40:47.185651 containerd[1472]: time="2024-11-12T22:40:47.185550844Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:40:47.185651 containerd[1472]: time="2024-11-12T22:40:47.185622900Z" level=info msg="containerd successfully booted in 0.079838s" Nov 12 22:40:47.186198 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:40:47.245555 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:40:47.271235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 22:40:47.287644 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 22:40:47.295078 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:40:47.295370 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:40:47.302685 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:40:47.314250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:40:47.315812 tar[1458]: linux-amd64/LICENSE Nov 12 22:40:47.315812 tar[1458]: linux-amd64/README.md Nov 12 22:40:47.330388 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:40:47.333204 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 22:40:47.334885 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:40:47.336815 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 22:40:48.351498 systemd-networkd[1403]: eth0: Gained IPv6LL Nov 12 22:40:48.355168 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:40:48.357304 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:40:48.369642 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 22:40:48.372178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:48.375022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:40:48.396365 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 22:40:48.396697 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 22:40:48.398819 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 22:40:48.403157 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:40:49.439954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:49.441649 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 12 22:40:49.443036 systemd[1]: Startup finished in 1.525s (kernel) + 6.410s (initrd) + 4.984s (userspace) = 12.919s. Nov 12 22:40:49.445371 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:40:50.178190 kubelet[1550]: E1112 22:40:50.178002 1550 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:40:50.183400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:40:50.183673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:40:50.184052 systemd[1]: kubelet.service: Consumed 1.682s CPU time. Nov 12 22:40:52.433502 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 22:40:52.434946 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:50886.service - OpenSSH per-connection server daemon (10.0.0.1:50886). Nov 12 22:40:52.497054 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 50886 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:52.499136 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:52.508855 systemd-logind[1445]: New session 1 of user core. Nov 12 22:40:52.510216 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 22:40:52.522561 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 22:40:52.534103 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 22:40:52.544598 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 22:40:52.547788 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 22:40:52.666671 systemd[1568]: Queued start job for default target default.target. Nov 12 22:40:52.675798 systemd[1568]: Created slice app.slice - User Application Slice. Nov 12 22:40:52.675841 systemd[1568]: Reached target paths.target - Paths. Nov 12 22:40:52.675862 systemd[1568]: Reached target timers.target - Timers. Nov 12 22:40:52.677606 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 22:40:52.689251 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 22:40:52.689452 systemd[1568]: Reached target sockets.target - Sockets. Nov 12 22:40:52.689476 systemd[1568]: Reached target basic.target - Basic System. Nov 12 22:40:52.689527 systemd[1568]: Reached target default.target - Main User Target. Nov 12 22:40:52.689573 systemd[1568]: Startup finished in 134ms. Nov 12 22:40:52.689895 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 22:40:52.691704 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 22:40:52.761795 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:50894.service - OpenSSH per-connection server daemon (10.0.0.1:50894). Nov 12 22:40:52.803320 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 50894 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:52.804789 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:52.808471 systemd-logind[1445]: New session 2 of user core. 
Nov 12 22:40:52.820465 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 22:40:52.873431 sshd[1581]: Connection closed by 10.0.0.1 port 50894 Nov 12 22:40:52.873828 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:52.884060 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:50894.service: Deactivated successfully. Nov 12 22:40:52.885922 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 22:40:52.887335 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Nov 12 22:40:52.888659 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:50908.service - OpenSSH per-connection server daemon (10.0.0.1:50908). Nov 12 22:40:52.889465 systemd-logind[1445]: Removed session 2. Nov 12 22:40:52.928489 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 50908 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:52.929825 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:52.933845 systemd-logind[1445]: New session 3 of user core. Nov 12 22:40:52.943472 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 22:40:52.992151 sshd[1588]: Connection closed by 10.0.0.1 port 50908 Nov 12 22:40:52.992523 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:53.005207 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:50908.service: Deactivated successfully. Nov 12 22:40:53.006993 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 22:40:53.008431 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Nov 12 22:40:53.022590 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:50910.service - OpenSSH per-connection server daemon (10.0.0.1:50910). Nov 12 22:40:53.023495 systemd-logind[1445]: Removed session 3. Nov 12 22:40:53.057768 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 50910 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:53.059185 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:53.063068 systemd-logind[1445]: New session 4 of user core. Nov 12 22:40:53.072469 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 22:40:53.125819 sshd[1595]: Connection closed by 10.0.0.1 port 50910 Nov 12 22:40:53.126157 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:53.137082 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:50910.service: Deactivated successfully. Nov 12 22:40:53.138854 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 22:40:53.140248 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Nov 12 22:40:53.149654 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:50912.service - OpenSSH per-connection server daemon (10.0.0.1:50912). Nov 12 22:40:53.150678 systemd-logind[1445]: Removed session 4. Nov 12 22:40:53.184266 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 50912 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:53.185641 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:53.189673 systemd-logind[1445]: New session 5 of user core. Nov 12 22:40:53.200469 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 12 22:40:53.257933 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 22:40:53.258268 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:40:53.275412 sudo[1603]: pam_unix(sudo:session): session closed for user root Nov 12 22:40:53.277000 sshd[1602]: Connection closed by 10.0.0.1 port 50912 Nov 12 22:40:53.277481 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:53.291118 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:50912.service: Deactivated successfully. Nov 12 22:40:53.292997 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 22:40:53.294760 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Nov 12 22:40:53.296071 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:50926.service - OpenSSH per-connection server daemon (10.0.0.1:50926). Nov 12 22:40:53.296919 systemd-logind[1445]: Removed session 5. Nov 12 22:40:53.337628 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 50926 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:53.339333 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:53.343902 systemd-logind[1445]: New session 6 of user core. Nov 12 22:40:53.353482 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 22:40:53.409149 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 22:40:53.409513 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:40:53.413930 sudo[1612]: pam_unix(sudo:session): session closed for user root Nov 12 22:40:53.422038 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 12 22:40:53.422427 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:40:53.444691 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:40:53.479707 augenrules[1634]: No rules Nov 12 22:40:53.480697 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:40:53.480997 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:40:53.482489 sudo[1611]: pam_unix(sudo:session): session closed for user root Nov 12 22:40:53.484176 sshd[1610]: Connection closed by 10.0.0.1 port 50926 Nov 12 22:40:53.484606 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:53.495525 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:50926.service: Deactivated successfully. Nov 12 22:40:53.497767 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 22:40:53.499753 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Nov 12 22:40:53.511870 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:50938.service - OpenSSH per-connection server daemon (10.0.0.1:50938). Nov 12 22:40:53.513117 systemd-logind[1445]: Removed session 6. Nov 12 22:40:53.549052 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 50938 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:40:53.551067 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:53.556122 systemd-logind[1445]: New session 7 of user core. Nov 12 22:40:53.566555 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 12 22:40:53.622273 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 22:40:53.622654 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:40:54.218600 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 22:40:54.218803 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 22:40:54.817963 dockerd[1665]: time="2024-11-12T22:40:54.817871242Z" level=info msg="Starting up" Nov 12 22:40:55.176665 systemd[1]: var-lib-docker-metacopy\x2dcheck3370105238-merged.mount: Deactivated successfully. Nov 12 22:40:55.202672 dockerd[1665]: time="2024-11-12T22:40:55.202611773Z" level=info msg="Loading containers: start." Nov 12 22:40:55.382373 kernel: Initializing XFRM netlink socket Nov 12 22:40:55.470334 systemd-networkd[1403]: docker0: Link UP Nov 12 22:40:55.586860 dockerd[1665]: time="2024-11-12T22:40:55.586786452Z" level=info msg="Loading containers: done." Nov 12 22:40:55.603092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2329752500-merged.mount: Deactivated successfully. Nov 12 22:40:55.604650 dockerd[1665]: time="2024-11-12T22:40:55.604592518Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 22:40:55.604804 dockerd[1665]: time="2024-11-12T22:40:55.604699588Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 12 22:40:55.604857 dockerd[1665]: time="2024-11-12T22:40:55.604833179Z" level=info msg="Daemon has completed initialization" Nov 12 22:40:55.649027 dockerd[1665]: time="2024-11-12T22:40:55.648946519Z" level=info msg="API listen on /run/docker.sock" Nov 12 22:40:55.649253 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 22:40:56.617620 containerd[1472]: time="2024-11-12T22:40:56.617563263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 22:40:57.654067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508745364.mount: Deactivated successfully. 
Nov 12 22:40:59.571977 containerd[1472]: time="2024-11-12T22:40:59.571896393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:59.574257 containerd[1472]: time="2024-11-12T22:40:59.574158427Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 22:40:59.576701 containerd[1472]: time="2024-11-12T22:40:59.576611799Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:59.584842 containerd[1472]: time="2024-11-12T22:40:59.584738145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:59.586590 containerd[1472]: time="2024-11-12T22:40:59.586466948Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.968856697s" Nov 12 22:40:59.586590 containerd[1472]: time="2024-11-12T22:40:59.586506012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 22:40:59.632608 containerd[1472]: time="2024-11-12T22:40:59.632538382Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 22:41:00.268198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 22:41:00.281177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:41:00.891191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:00.893315 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:41:01.175248 kubelet[1935]: E1112 22:41:01.174208 1935 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:41:01.185899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:41:01.186229 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 22:41:02.637132 containerd[1472]: time="2024-11-12T22:41:02.637050240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:02.638121 containerd[1472]: time="2024-11-12T22:41:02.638042041Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 22:41:02.639770 containerd[1472]: time="2024-11-12T22:41:02.639706824Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:02.643246 containerd[1472]: time="2024-11-12T22:41:02.643205959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:02.646563 containerd[1472]: time="2024-11-12T22:41:02.645356213Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 3.012752769s" Nov 12 22:41:02.646563 containerd[1472]: time="2024-11-12T22:41:02.645394465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 22:41:02.672510 containerd[1472]: time="2024-11-12T22:41:02.672468377Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 22:41:04.105618 containerd[1472]: time="2024-11-12T22:41:04.105507862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:04.106386 containerd[1472]: time="2024-11-12T22:41:04.106309435Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 22:41:04.107624 containerd[1472]: time="2024-11-12T22:41:04.107585910Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:04.110703 containerd[1472]: time="2024-11-12T22:41:04.110654377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:04.112036 containerd[1472]: time="2024-11-12T22:41:04.111979633Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.439285593s" Nov 12 22:41:04.112036 containerd[1472]: time="2024-11-12T22:41:04.112017504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 22:41:04.141415 
containerd[1472]: time="2024-11-12T22:41:04.141363229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 22:41:05.379936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184685850.mount: Deactivated successfully. Nov 12 22:41:06.175242 containerd[1472]: time="2024-11-12T22:41:06.175162001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:06.176002 containerd[1472]: time="2024-11-12T22:41:06.175916556Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 22:41:06.177181 containerd[1472]: time="2024-11-12T22:41:06.177152395Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:06.179260 containerd[1472]: time="2024-11-12T22:41:06.179222949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:06.179936 containerd[1472]: time="2024-11-12T22:41:06.179883719Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.038474714s" Nov 12 22:41:06.180001 containerd[1472]: time="2024-11-12T22:41:06.179935476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 22:41:06.208871 containerd[1472]: time="2024-11-12T22:41:06.208818933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 22:41:06.774796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771885615.mount: Deactivated successfully. 
Nov 12 22:41:08.133519 containerd[1472]: time="2024-11-12T22:41:08.133427827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:08.134210 containerd[1472]: time="2024-11-12T22:41:08.134159981Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 22:41:08.135478 containerd[1472]: time="2024-11-12T22:41:08.135432798Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:08.138735 containerd[1472]: time="2024-11-12T22:41:08.138676534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:08.139751 containerd[1472]: time="2024-11-12T22:41:08.139714952Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.93086474s" Nov 12 22:41:08.139809 containerd[1472]: time="2024-11-12T22:41:08.139752663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 22:41:08.166987 containerd[1472]: time="2024-11-12T22:41:08.166934428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 22:41:08.862230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078304804.mount: Deactivated successfully. 
Nov 12 22:41:08.869772 containerd[1472]: time="2024-11-12T22:41:08.869706248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:08.870559 containerd[1472]: time="2024-11-12T22:41:08.870475882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 22:41:08.871915 containerd[1472]: time="2024-11-12T22:41:08.871882370Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:08.874655 containerd[1472]: time="2024-11-12T22:41:08.874612332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:08.875635 containerd[1472]: time="2024-11-12T22:41:08.875575569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 708.593462ms" Nov 12 22:41:08.875635 containerd[1472]: time="2024-11-12T22:41:08.875630783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 22:41:08.902324 containerd[1472]: time="2024-11-12T22:41:08.902277524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 22:41:10.172823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165893389.mount: Deactivated successfully. Nov 12 22:41:11.268058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 22:41:11.277568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:41:11.740532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:11.742037 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:41:11.838874 kubelet[2092]: E1112 22:41:11.838776 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:41:11.875741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:41:11.875999 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 22:41:12.689691 containerd[1472]: time="2024-11-12T22:41:12.689607966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:12.704677 containerd[1472]: time="2024-11-12T22:41:12.704584934Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 22:41:12.710719 containerd[1472]: time="2024-11-12T22:41:12.710665532Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:12.719177 containerd[1472]: time="2024-11-12T22:41:12.719129270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:12.720395 containerd[1472]: time="2024-11-12T22:41:12.720333840Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.817783133s" Nov 12 22:41:12.720395 containerd[1472]: time="2024-11-12T22:41:12.720389424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 22:41:15.372327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:15.381694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:41:15.402216 systemd[1]: Reloading requested from client PID 2184 ('systemctl') (unit session-7.scope)... Nov 12 22:41:15.402245 systemd[1]: Reloading... Nov 12 22:41:15.519377 zram_generator::config[2226]: No configuration found. Nov 12 22:41:15.840904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:41:15.921395 systemd[1]: Reloading finished in 518 ms. Nov 12 22:41:15.976069 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 22:41:15.976167 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 22:41:15.976509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:15.979042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:41:16.126149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:16.143754 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:41:16.209844 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:41:16.209844 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Nov 12 22:41:16.209844 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:41:16.210286 kubelet[2272]: I1112 22:41:16.209905 2272 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:41:16.621531 kubelet[2272]: I1112 22:41:16.621472 2272 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:41:16.621531 kubelet[2272]: I1112 22:41:16.621518 2272 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:41:16.621802 kubelet[2272]: I1112 22:41:16.621786 2272 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:41:16.644319 kubelet[2272]: E1112 22:41:16.644268 2272 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.645728 kubelet[2272]: I1112 22:41:16.645681 2272 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:41:16.658524 kubelet[2272]: I1112 22:41:16.658240 2272 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:41:16.658811 kubelet[2272]: I1112 22:41:16.658779 2272 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:41:16.659014 kubelet[2272]: I1112 22:41:16.658986 2272 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:41:16.659152 kubelet[2272]: I1112 22:41:16.659017 2272 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:41:16.659152 kubelet[2272]: I1112 22:41:16.659027 2272 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 
22:41:16.659209 kubelet[2272]: I1112 22:41:16.659186 2272 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:41:16.659364 kubelet[2272]: I1112 22:41:16.659307 2272 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:41:16.659364 kubelet[2272]: I1112 22:41:16.659355 2272 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:41:16.659434 kubelet[2272]: I1112 22:41:16.659395 2272 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:41:16.659434 kubelet[2272]: I1112 22:41:16.659418 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:41:16.660868 kubelet[2272]: W1112 22:41:16.660699 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.660868 kubelet[2272]: E1112 22:41:16.660767 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.660868 kubelet[2272]: W1112 22:41:16.660839 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.660868 kubelet[2272]: E1112 22:41:16.660873 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.661366 kubelet[2272]: I1112 22:41:16.661314 2272 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:41:16.664518 kubelet[2272]: I1112 22:41:16.664311 2272 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:41:16.664518 kubelet[2272]: W1112 22:41:16.664430 2272 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
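The reflector warnings above are failed LIST calls against the API server at 10.0.0.30:6443, which is not up yet. A small client-go sketch of the same Node LIST (the kubeconfig path is illustrative, not taken from this log) shows the call that keeps returning "connection refused":

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubelet kubeconfig; the path here is an assumption for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same shape as the reflector's request: nodes filtered to this host's node name.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
	})
	if err != nil {
		// While 10.0.0.30:6443 is unreachable this fails with "connection refused", as in the log.
		log.Fatal(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}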
Nov 12 22:41:16.665531 kubelet[2272]: I1112 22:41:16.665506 2272 server.go:1256] "Started kubelet" Nov 12 22:41:16.665778 kubelet[2272]: I1112 22:41:16.665637 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:41:16.666416 kubelet[2272]: I1112 22:41:16.666383 2272 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:41:16.667660 kubelet[2272]: I1112 22:41:16.667055 2272 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:41:16.668387 kubelet[2272]: I1112 22:41:16.668214 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:41:16.669210 kubelet[2272]: I1112 22:41:16.668884 2272 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:41:16.669806 kubelet[2272]: I1112 22:41:16.669768 2272 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:41:16.669889 kubelet[2272]: I1112 22:41:16.669867 2272 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:41:16.670167 kubelet[2272]: I1112 22:41:16.670124 2272 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:41:16.671769 kubelet[2272]: E1112 22:41:16.671662 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms" Nov 12 22:41:16.671769 kubelet[2272]: W1112 22:41:16.671722 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.671769 kubelet[2272]: E1112 22:41:16.671761 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.674409 kubelet[2272]: E1112 22:41:16.674382 2272 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180759c3338a97d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:41:16.66546069 +0000 UTC m=+0.516815544,LastTimestamp:2024-11-12 22:41:16.66546069 +0000 UTC m=+0.516815544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:41:16.675026 kubelet[2272]: I1112 22:41:16.675003 2272 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:41:16.675026 kubelet[2272]: I1112 22:41:16.675024 2272 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:41:16.675124 kubelet[2272]: I1112 22:41:16.675103 2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Nov 12 22:41:16.691067 kubelet[2272]: I1112 22:41:16.691025 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:41:16.692927 kubelet[2272]: I1112 22:41:16.692894 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 22:41:16.692927 kubelet[2272]: I1112 22:41:16.692929 2272 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:41:16.693023 kubelet[2272]: I1112 22:41:16.692954 2272 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:41:16.693023 kubelet[2272]: E1112 22:41:16.693010 2272 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:41:16.693990 kubelet[2272]: I1112 22:41:16.693602 2272 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:41:16.693990 kubelet[2272]: I1112 22:41:16.693625 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:41:16.693990 kubelet[2272]: I1112 22:41:16.693650 2272 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:41:16.693990 kubelet[2272]: W1112 22:41:16.693727 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.693990 kubelet[2272]: E1112 22:41:16.693778 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:16.701662 kubelet[2272]: I1112 22:41:16.701612 2272 policy_none.go:49] "None policy: Start" Nov 12 22:41:16.702327 kubelet[2272]: I1112 22:41:16.702290 2272 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:41:16.702384 kubelet[2272]: I1112 22:41:16.702329 2272 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:41:16.716155 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 22:41:16.730214 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 22:41:16.733765 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 22:41:16.743376 kubelet[2272]: I1112 22:41:16.743317 2272 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:41:16.743760 kubelet[2272]: I1112 22:41:16.743732 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:41:16.744743 kubelet[2272]: E1112 22:41:16.744720 2272 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 22:41:16.771081 kubelet[2272]: I1112 22:41:16.771048 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:41:16.771543 kubelet[2272]: E1112 22:41:16.771521 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Nov 12 22:41:16.794034 kubelet[2272]: I1112 22:41:16.793930 2272 topology_manager.go:215] "Topology Admit Handler" podUID="aa07abae1718585244e41c6691fccd27" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:41:16.795545 kubelet[2272]: I1112 22:41:16.795490 2272 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:41:16.796450 kubelet[2272]: I1112 22:41:16.796423 2272 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:41:16.803031 systemd[1]: Created slice kubepods-burstable-podaa07abae1718585244e41c6691fccd27.slice - libcontainer container kubepods-burstable-podaa07abae1718585244e41c6691fccd27.slice. Nov 12 22:41:16.819164 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice. Nov 12 22:41:16.831724 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice. 
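The "Unable to register node with API server" error above corresponds to a POST to /api/v1/nodes being refused. A hedged client-go sketch of that registration call (node name "localhost" comes from the log; the kubeconfig path is illustrative) is:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}

	// The same POST to /api/v1/nodes that is being refused in the log; the kubelet retries until it succeeds.
	if _, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
		log.Printf("register failed, will retry: %v", err)
	}
}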
Nov 12 22:41:16.872557 kubelet[2272]: E1112 22:41:16.872402 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms" Nov 12 22:41:16.971891 kubelet[2272]: I1112 22:41:16.971815 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa07abae1718585244e41c6691fccd27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa07abae1718585244e41c6691fccd27\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:41:16.971891 kubelet[2272]: I1112 22:41:16.971896 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa07abae1718585244e41c6691fccd27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa07abae1718585244e41c6691fccd27\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:41:16.972086 kubelet[2272]: I1112 22:41:16.971944 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:16.972086 kubelet[2272]: I1112 22:41:16.971987 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:16.972086 kubelet[2272]: I1112 22:41:16.972013 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:16.972086 kubelet[2272]: I1112 22:41:16.972042 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:16.972086 kubelet[2272]: I1112 22:41:16.972070 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa07abae1718585244e41c6691fccd27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa07abae1718585244e41c6691fccd27\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:41:16.972281 kubelet[2272]: I1112 22:41:16.972104 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:16.972281 
kubelet[2272]: I1112 22:41:16.972130 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:41:16.973101 kubelet[2272]: I1112 22:41:16.973075 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:41:16.973534 kubelet[2272]: E1112 22:41:16.973509 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Nov 12 22:41:17.119713 kubelet[2272]: E1112 22:41:17.119644 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:17.120764 containerd[1472]: time="2024-11-12T22:41:17.120689952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa07abae1718585244e41c6691fccd27,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:17.130402 kubelet[2272]: E1112 22:41:17.130167 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:17.130976 containerd[1472]: time="2024-11-12T22:41:17.130919323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:17.134902 kubelet[2272]: E1112 22:41:17.134795 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:17.135628 containerd[1472]: time="2024-11-12T22:41:17.135524052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:17.273622 kubelet[2272]: E1112 22:41:17.273571 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms" Nov 12 22:41:17.375570 kubelet[2272]: I1112 22:41:17.375521 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:41:17.376023 kubelet[2272]: E1112 22:41:17.375986 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Nov 12 22:41:17.645047 kubelet[2272]: W1112 22:41:17.644934 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:17.645047 kubelet[2272]: E1112 22:41:17.645029 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: 
connection refused Nov 12 22:41:17.742047 kubelet[2272]: W1112 22:41:17.741969 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:17.742047 kubelet[2272]: E1112 22:41:17.742032 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:17.965193 kubelet[2272]: W1112 22:41:17.964998 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:17.965193 kubelet[2272]: E1112 22:41:17.965071 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:18.054379 kubelet[2272]: W1112 22:41:18.054269 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:18.054379 kubelet[2272]: E1112 22:41:18.054381 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:18.074926 kubelet[2272]: E1112 22:41:18.074878 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="1.6s" Nov 12 22:41:18.178226 kubelet[2272]: I1112 22:41:18.178154 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:41:18.178729 kubelet[2272]: E1112 22:41:18.178675 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" Nov 12 22:41:18.544326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847694398.mount: Deactivated successfully. 
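Note how the lease controller's retry interval doubles across these entries: 200ms, 400ms, 800ms, 1.6s. That is a standard exponential backoff; a small sketch of the pattern with the apimachinery wait helper (the specific parameters are chosen to mirror the logged intervals, not taken from kubelet source) is:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Roughly the doubling seen above: 200ms -> 400ms -> 800ms -> 1.6s.
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond,
		Factor:   2.0,
		Steps:    4,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: ensuring node lease...\n", attempt)
		// Returning false retries; in the kubelet this would be the failing GET/CREATE of the Lease.
		return false, nil
	})
	fmt.Println("gave up:", err) // ErrWaitTimeout once Steps attempts are exhausted
}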
Nov 12 22:41:18.620638 containerd[1472]: time="2024-11-12T22:41:18.620548003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:41:18.625002 containerd[1472]: time="2024-11-12T22:41:18.624951615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 22:41:18.635264 containerd[1472]: time="2024-11-12T22:41:18.635221802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:41:18.649666 containerd[1472]: time="2024-11-12T22:41:18.649614253Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:41:18.655783 containerd[1472]: time="2024-11-12T22:41:18.655741809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:41:18.661649 containerd[1472]: time="2024-11-12T22:41:18.661586955Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:41:18.663118 containerd[1472]: time="2024-11-12T22:41:18.663082010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:41:18.668426 containerd[1472]: time="2024-11-12T22:41:18.668387724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:41:18.669578 containerd[1472]: time="2024-11-12T22:41:18.669512674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.548685635s" Nov 12 22:41:18.670251 containerd[1472]: time="2024-11-12T22:41:18.670217016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.534564633s" Nov 12 22:41:18.691217 containerd[1472]: time="2024-11-12T22:41:18.691150508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.560106772s" Nov 12 22:41:18.826457 kubelet[2272]: E1112 22:41:18.825651 2272 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": 
dial tcp 10.0.0.30:6443: connect: connection refused Nov 12 22:41:18.972793 containerd[1472]: time="2024-11-12T22:41:18.972667404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:18.972793 containerd[1472]: time="2024-11-12T22:41:18.972735572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:18.972793 containerd[1472]: time="2024-11-12T22:41:18.972765298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:18.972987 containerd[1472]: time="2024-11-12T22:41:18.972878510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:18.981225 containerd[1472]: time="2024-11-12T22:41:18.981110895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:18.981391 containerd[1472]: time="2024-11-12T22:41:18.981264152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:18.981391 containerd[1472]: time="2024-11-12T22:41:18.981317052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:18.981623 containerd[1472]: time="2024-11-12T22:41:18.981471151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:18.993863 containerd[1472]: time="2024-11-12T22:41:18.993654908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:18.993863 containerd[1472]: time="2024-11-12T22:41:18.993722395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:18.993863 containerd[1472]: time="2024-11-12T22:41:18.993737243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:18.993863 containerd[1472]: time="2024-11-12T22:41:18.993832081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:18.997597 systemd[1]: Started cri-containerd-a5a13e6618747e92a8b07c180a63c48ebcdc14b47259757a70e01b859ce971a6.scope - libcontainer container a5a13e6618747e92a8b07c180a63c48ebcdc14b47259757a70e01b859ce971a6. Nov 12 22:41:19.027512 systemd[1]: Started cri-containerd-785d2b57fcbcc122b193fa0e63550ef6b0529aedf9bc7b07b6cab1212aa2cd06.scope - libcontainer container 785d2b57fcbcc122b193fa0e63550ef6b0529aedf9bc7b07b6cab1212aa2cd06. Nov 12 22:41:19.032989 systemd[1]: Started cri-containerd-66d86b71efdc7153edc0083546de1a4d019e4e3cc8377a090b0c4beac3e4ee9f.scope - libcontainer container 66d86b71efdc7153edc0083546de1a4d019e4e3cc8377a090b0c4beac3e4ee9f. 
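Each "Started cri-containerd-<id>.scope" unit above is systemd tracking a container task that containerd created for one of the sandboxes. As a rough illustration only (the CRI path also sets up the network namespace, cgroup parent, and pause image config, all omitted here; the container ID and socket path are assumptions), creating and starting a task with the containerd Go client looks like:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The pause image is already present (it is pinned by the CRI plugin).
	image, err := client.GetImage(ctx, "registry.k8s.io/pause:3.8")
	if err != nil {
		log.Fatal(err)
	}

	// Container + task + start; the container ID here is made up for the example.
	container, err := client.NewContainer(ctx, "example-sandbox",
		containerd.WithNewSnapshot("example-sandbox-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("task started, pid %d", task.Pid())
	// Cleanup (task.Kill/Delete, container.Delete) is omitted from this sketch.
}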
Nov 12 22:41:19.075531 containerd[1472]: time="2024-11-12T22:41:19.075445217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5a13e6618747e92a8b07c180a63c48ebcdc14b47259757a70e01b859ce971a6\"" Nov 12 22:41:19.077395 kubelet[2272]: E1112 22:41:19.076854 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:19.082393 containerd[1472]: time="2024-11-12T22:41:19.081114960Z" level=info msg="CreateContainer within sandbox \"a5a13e6618747e92a8b07c180a63c48ebcdc14b47259757a70e01b859ce971a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:41:19.096295 containerd[1472]: time="2024-11-12T22:41:19.096238696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa07abae1718585244e41c6691fccd27,Namespace:kube-system,Attempt:0,} returns sandbox id \"785d2b57fcbcc122b193fa0e63550ef6b0529aedf9bc7b07b6cab1212aa2cd06\"" Nov 12 22:41:19.097019 kubelet[2272]: E1112 22:41:19.096982 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:19.099521 containerd[1472]: time="2024-11-12T22:41:19.099486274Z" level=info msg="CreateContainer within sandbox \"785d2b57fcbcc122b193fa0e63550ef6b0529aedf9bc7b07b6cab1212aa2cd06\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:41:19.111937 containerd[1472]: time="2024-11-12T22:41:19.111768651Z" level=info msg="CreateContainer within sandbox \"a5a13e6618747e92a8b07c180a63c48ebcdc14b47259757a70e01b859ce971a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"31e6ed0befd61637e5d947979c45740c27cf0069b81ab70e67ec8ef8746f795b\"" Nov 12 22:41:19.112539 containerd[1472]: time="2024-11-12T22:41:19.112480268Z" level=info msg="StartContainer for \"31e6ed0befd61637e5d947979c45740c27cf0069b81ab70e67ec8ef8746f795b\"" Nov 12 22:41:19.115311 containerd[1472]: time="2024-11-12T22:41:19.115278904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"66d86b71efdc7153edc0083546de1a4d019e4e3cc8377a090b0c4beac3e4ee9f\"" Nov 12 22:41:19.115913 kubelet[2272]: E1112 22:41:19.115787 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:19.118382 containerd[1472]: time="2024-11-12T22:41:19.118316731Z" level=info msg="CreateContainer within sandbox \"66d86b71efdc7153edc0083546de1a4d019e4e3cc8377a090b0c4beac3e4ee9f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:41:19.121576 containerd[1472]: time="2024-11-12T22:41:19.121537558Z" level=info msg="CreateContainer within sandbox \"785d2b57fcbcc122b193fa0e63550ef6b0529aedf9bc7b07b6cab1212aa2cd06\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"344d5ce4460ab6dfd3fcc6f9c06004c0aef7775d6d792df07441cd2d2c417729\"" Nov 12 22:41:19.122666 containerd[1472]: time="2024-11-12T22:41:19.122632752Z" level=info msg="StartContainer for \"344d5ce4460ab6dfd3fcc6f9c06004c0aef7775d6d792df07441cd2d2c417729\"" Nov 12 22:41:19.139514 
containerd[1472]: time="2024-11-12T22:41:19.139470600Z" level=info msg="CreateContainer within sandbox \"66d86b71efdc7153edc0083546de1a4d019e4e3cc8377a090b0c4beac3e4ee9f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e1ec92fc9e3aefdf6f3077734cd1f222076a8652f25d7996e25f7df21675960\"" Nov 12 22:41:19.141034 containerd[1472]: time="2024-11-12T22:41:19.140983136Z" level=info msg="StartContainer for \"4e1ec92fc9e3aefdf6f3077734cd1f222076a8652f25d7996e25f7df21675960\"" Nov 12 22:41:19.147077 systemd[1]: Started cri-containerd-31e6ed0befd61637e5d947979c45740c27cf0069b81ab70e67ec8ef8746f795b.scope - libcontainer container 31e6ed0befd61637e5d947979c45740c27cf0069b81ab70e67ec8ef8746f795b. Nov 12 22:41:19.158487 systemd[1]: Started cri-containerd-344d5ce4460ab6dfd3fcc6f9c06004c0aef7775d6d792df07441cd2d2c417729.scope - libcontainer container 344d5ce4460ab6dfd3fcc6f9c06004c0aef7775d6d792df07441cd2d2c417729. Nov 12 22:41:19.199539 systemd[1]: Started cri-containerd-4e1ec92fc9e3aefdf6f3077734cd1f222076a8652f25d7996e25f7df21675960.scope - libcontainer container 4e1ec92fc9e3aefdf6f3077734cd1f222076a8652f25d7996e25f7df21675960. Nov 12 22:41:19.245645 containerd[1472]: time="2024-11-12T22:41:19.245581194Z" level=info msg="StartContainer for \"31e6ed0befd61637e5d947979c45740c27cf0069b81ab70e67ec8ef8746f795b\" returns successfully" Nov 12 22:41:19.251597 containerd[1472]: time="2024-11-12T22:41:19.251545834Z" level=info msg="StartContainer for \"344d5ce4460ab6dfd3fcc6f9c06004c0aef7775d6d792df07441cd2d2c417729\" returns successfully" Nov 12 22:41:19.276185 containerd[1472]: time="2024-11-12T22:41:19.276107530Z" level=info msg="StartContainer for \"4e1ec92fc9e3aefdf6f3077734cd1f222076a8652f25d7996e25f7df21675960\" returns successfully" Nov 12 22:41:19.703425 kubelet[2272]: E1112 22:41:19.703360 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:19.707407 kubelet[2272]: E1112 22:41:19.706735 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:19.713196 kubelet[2272]: E1112 22:41:19.713158 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:19.780145 kubelet[2272]: I1112 22:41:19.779678 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:41:20.768578 kubelet[2272]: E1112 22:41:20.768521 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:20.916891 kubelet[2272]: E1112 22:41:20.916819 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 22:41:20.999599 kubelet[2272]: I1112 22:41:20.999463 2272 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:41:21.667188 kubelet[2272]: I1112 22:41:21.667080 2272 apiserver.go:52] "Watching apiserver" Nov 12 22:41:21.670685 kubelet[2272]: I1112 22:41:21.670645 2272 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:41:23.746250 systemd[1]: Reloading requested from client PID 2555 
('systemctl') (unit session-7.scope)... Nov 12 22:41:23.746268 systemd[1]: Reloading... Nov 12 22:41:23.838408 zram_generator::config[2597]: No configuration found. Nov 12 22:41:23.951089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:41:24.046154 systemd[1]: Reloading finished in 299 ms. Nov 12 22:41:24.102573 kubelet[2272]: I1112 22:41:24.102480 2272 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:41:24.102602 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:41:24.114647 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:41:24.114939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:24.115007 systemd[1]: kubelet.service: Consumed 1.105s CPU time, 116.0M memory peak, 0B memory swap peak. Nov 12 22:41:24.127755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:41:24.297013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:41:24.303119 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:41:24.656475 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:41:24.656475 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:41:24.656475 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:41:24.656475 kubelet[2639]: I1112 22:41:24.655485 2639 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:41:24.660539 sudo[2652]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 22:41:24.661002 kubelet[2639]: I1112 22:41:24.660966 2639 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:41:24.661002 kubelet[2639]: I1112 22:41:24.660986 2639 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:41:24.661017 sudo[2652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 22:41:24.661227 kubelet[2639]: I1112 22:41:24.661200 2639 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:41:24.662592 kubelet[2639]: I1112 22:41:24.662560 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 22:41:24.664684 kubelet[2639]: I1112 22:41:24.664632 2639 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:41:24.675084 kubelet[2639]: I1112 22:41:24.675046 2639 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 22:41:24.675352 kubelet[2639]: I1112 22:41:24.675322 2639 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:41:24.675537 kubelet[2639]: I1112 22:41:24.675509 2639 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:41:24.675638 kubelet[2639]: I1112 22:41:24.675541 2639 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:41:24.675638 kubelet[2639]: I1112 22:41:24.675551 2639 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:41:24.675638 kubelet[2639]: I1112 22:41:24.675585 2639 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:41:24.675710 kubelet[2639]: I1112 22:41:24.675701 2639 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:41:24.675732 kubelet[2639]: I1112 22:41:24.675716 2639 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:41:24.678356 kubelet[2639]: I1112 22:41:24.675753 2639 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:41:24.678356 kubelet[2639]: I1112 22:41:24.675786 2639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:41:24.678356 kubelet[2639]: I1112 22:41:24.676730 2639 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:41:24.678356 kubelet[2639]: I1112 22:41:24.676931 2639 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:41:24.678356 kubelet[2639]: I1112 22:41:24.677332 2639 server.go:1256] "Started kubelet" Nov 12 22:41:24.678697 kubelet[2639]: I1112 22:41:24.678671 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:41:24.678962 kubelet[2639]: I1112 22:41:24.678925 2639 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:41:24.678962 kubelet[2639]: I1112 22:41:24.678940 2639 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:41:24.679852 kubelet[2639]: 
I1112 22:41:24.679825 2639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:41:24.679930 kubelet[2639]: I1112 22:41:24.679905 2639 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:41:24.680546 kubelet[2639]: I1112 22:41:24.680527 2639 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:41:24.680716 kubelet[2639]: I1112 22:41:24.680701 2639 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:41:24.680938 kubelet[2639]: I1112 22:41:24.680925 2639 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:41:24.684369 kubelet[2639]: I1112 22:41:24.683432 2639 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:41:24.684369 kubelet[2639]: I1112 22:41:24.683514 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:41:24.691926 kubelet[2639]: I1112 22:41:24.691889 2639 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:41:24.700067 kubelet[2639]: E1112 22:41:24.699120 2639 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:41:24.714105 kubelet[2639]: I1112 22:41:24.714063 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:41:24.717038 kubelet[2639]: I1112 22:41:24.716995 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 22:41:24.717403 kubelet[2639]: I1112 22:41:24.717384 2639 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:41:24.717499 kubelet[2639]: I1112 22:41:24.717486 2639 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:41:24.717928 kubelet[2639]: E1112 22:41:24.717900 2639 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:41:24.744366 kubelet[2639]: I1112 22:41:24.744307 2639 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:41:24.744366 kubelet[2639]: I1112 22:41:24.744334 2639 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:41:24.744366 kubelet[2639]: I1112 22:41:24.744370 2639 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:41:24.744570 kubelet[2639]: I1112 22:41:24.744546 2639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:41:24.744597 kubelet[2639]: I1112 22:41:24.744572 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:41:24.744597 kubelet[2639]: I1112 22:41:24.744581 2639 policy_none.go:49] "None policy: Start" Nov 12 22:41:24.745914 kubelet[2639]: I1112 22:41:24.745710 2639 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:41:24.745914 kubelet[2639]: I1112 22:41:24.745746 2639 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:41:24.746212 kubelet[2639]: I1112 22:41:24.746196 2639 state_mem.go:75] "Updated machine memory state" Nov 12 22:41:24.750892 kubelet[2639]: I1112 22:41:24.750873 2639 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:41:24.751577 kubelet[2639]: I1112 22:41:24.751563 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:41:24.786399 kubelet[2639]: I1112 
22:41:24.786351 2639 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:41:24.793757 kubelet[2639]: I1112 22:41:24.793711 2639 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 22:41:24.793904 kubelet[2639]: I1112 22:41:24.793821 2639 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:41:24.818272 kubelet[2639]: I1112 22:41:24.818199 2639 topology_manager.go:215] "Topology Admit Handler" podUID="aa07abae1718585244e41c6691fccd27" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:41:24.818465 kubelet[2639]: I1112 22:41:24.818371 2639 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:41:24.818465 kubelet[2639]: I1112 22:41:24.818441 2639 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:41:24.982269 kubelet[2639]: I1112 22:41:24.982119 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:24.982269 kubelet[2639]: I1112 22:41:24.982176 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:24.982269 kubelet[2639]: I1112 22:41:24.982208 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:24.982269 kubelet[2639]: I1112 22:41:24.982232 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:41:24.982545 kubelet[2639]: I1112 22:41:24.982311 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa07abae1718585244e41c6691fccd27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa07abae1718585244e41c6691fccd27\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:41:24.982545 kubelet[2639]: I1112 22:41:24.982366 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa07abae1718585244e41c6691fccd27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa07abae1718585244e41c6691fccd27\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:41:24.982545 kubelet[2639]: I1112 22:41:24.982397 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa07abae1718585244e41c6691fccd27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa07abae1718585244e41c6691fccd27\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:41:24.982545 kubelet[2639]: I1112 22:41:24.982416 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:24.982545 kubelet[2639]: I1112 22:41:24.982449 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:41:25.128698 kubelet[2639]: E1112 22:41:25.128657 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:25.129251 kubelet[2639]: E1112 22:41:25.129215 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:25.129778 kubelet[2639]: E1112 22:41:25.129733 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:25.201718 sudo[2652]: pam_unix(sudo:session): session closed for user root Nov 12 22:41:25.676363 kubelet[2639]: I1112 22:41:25.676274 2639 apiserver.go:52] "Watching apiserver" Nov 12 22:41:25.681121 kubelet[2639]: I1112 22:41:25.681093 2639 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:41:25.736280 kubelet[2639]: E1112 22:41:25.736232 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:25.736280 kubelet[2639]: E1112 22:41:25.736282 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:25.737659 kubelet[2639]: E1112 22:41:25.737522 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:25.768853 kubelet[2639]: I1112 22:41:25.768795 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.76874104 podStartE2EDuration="1.76874104s" podCreationTimestamp="2024-11-12 22:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:41:25.762303456 +0000 UTC m=+1.430647842" watchObservedRunningTime="2024-11-12 22:41:25.76874104 +0000 UTC m=+1.437085426" Nov 12 22:41:25.769097 kubelet[2639]: I1112 22:41:25.768900 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7688761469999998 podStartE2EDuration="1.768876147s" podCreationTimestamp="2024-11-12 22:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:41:25.768667941 +0000 UTC m=+1.437012327" watchObservedRunningTime="2024-11-12 22:41:25.768876147 +0000 UTC m=+1.437220533" Nov 12 22:41:25.776521 kubelet[2639]: I1112 22:41:25.776449 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.776405854 podStartE2EDuration="1.776405854s" podCreationTimestamp="2024-11-12 22:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:41:25.775945807 +0000 UTC m=+1.444290213" watchObservedRunningTime="2024-11-12 22:41:25.776405854 +0000 UTC m=+1.444750240" Nov 12 22:41:26.737936 kubelet[2639]: E1112 22:41:26.737882 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:27.021893 sudo[1645]: pam_unix(sudo:session): session closed for user root Nov 12 22:41:27.023447 sshd[1644]: Connection closed by 10.0.0.1 port 50938 Nov 12 22:41:27.030504 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:27.035243 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:50938.service: Deactivated successfully. Nov 12 22:41:27.037394 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 22:41:27.037601 systemd[1]: session-7.scope: Consumed 6.081s CPU time, 185.9M memory peak, 0B memory swap peak. Nov 12 22:41:27.038019 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Nov 12 22:41:27.039413 systemd-logind[1445]: Removed session 7. Nov 12 22:41:28.554544 kubelet[2639]: E1112 22:41:28.554489 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:32.246409 kubelet[2639]: E1112 22:41:32.246330 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:32.527685 update_engine[1447]: I20241112 22:41:32.527580 1447 update_attempter.cc:509] Updating boot flags... 
Nov 12 22:41:32.557397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2723) Nov 12 22:41:32.602370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2727) Nov 12 22:41:32.648383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2727) Nov 12 22:41:32.747256 kubelet[2639]: E1112 22:41:32.747216 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:33.681571 kubelet[2639]: E1112 22:41:33.681508 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:33.748413 kubelet[2639]: E1112 22:41:33.748373 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:38.560397 kubelet[2639]: E1112 22:41:38.559971 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:38.756968 kubelet[2639]: E1112 22:41:38.756924 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:38.971223 kubelet[2639]: I1112 22:41:38.971097 2639 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:41:38.971637 containerd[1472]: time="2024-11-12T22:41:38.971575603Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 22:41:38.972469 kubelet[2639]: I1112 22:41:38.972437 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:41:39.233497 kubelet[2639]: I1112 22:41:39.232802 2639 topology_manager.go:215] "Topology Admit Handler" podUID="550844d3-5e13-4b40-b1f3-bdfd5861d458" podNamespace="kube-system" podName="kube-proxy-5v55j" Nov 12 22:41:39.236819 kubelet[2639]: I1112 22:41:39.236764 2639 topology_manager.go:215] "Topology Admit Handler" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" podNamespace="kube-system" podName="cilium-rdpm9" Nov 12 22:41:39.247068 systemd[1]: Created slice kubepods-besteffort-pod550844d3_5e13_4b40_b1f3_bdfd5861d458.slice - libcontainer container kubepods-besteffort-pod550844d3_5e13_4b40_b1f3_bdfd5861d458.slice. 
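The "Updating runtime config through cri with podcidr" and "Updating Pod CIDR" entries above record the kubelet picking up the node's pod CIDR (192.168.0.0/24) and forwarding it to the container runtime. A short client-go sketch that reads the same field from the Node object (kubeconfig path illustrative) is:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The kubelet watches this field and pushes it to the runtime, as logged above.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("spec.podCIDR:", node.Spec.PodCIDR) // e.g. 192.168.0.0/24
}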
Nov 12 22:41:39.265657 kubelet[2639]: I1112 22:41:39.265607 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/550844d3-5e13-4b40-b1f3-bdfd5861d458-xtables-lock\") pod \"kube-proxy-5v55j\" (UID: \"550844d3-5e13-4b40-b1f3-bdfd5861d458\") " pod="kube-system/kube-proxy-5v55j" Nov 12 22:41:39.265657 kubelet[2639]: I1112 22:41:39.265652 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-hostproc\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265657 kubelet[2639]: I1112 22:41:39.265673 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxw96\" (UniqueName: \"kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-kube-api-access-sxw96\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265888 kubelet[2639]: I1112 22:41:39.265692 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-run\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265888 kubelet[2639]: I1112 22:41:39.265709 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-lib-modules\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265888 kubelet[2639]: I1112 22:41:39.265730 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-kernel\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265888 kubelet[2639]: I1112 22:41:39.265747 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-xtables-lock\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265888 kubelet[2639]: I1112 22:41:39.265763 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17e10f0a-edfa-4789-8017-df44930f0e11-clustermesh-secrets\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.265888 kubelet[2639]: I1112 22:41:39.265782 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/550844d3-5e13-4b40-b1f3-bdfd5861d458-kube-proxy\") pod \"kube-proxy-5v55j\" (UID: \"550844d3-5e13-4b40-b1f3-bdfd5861d458\") " pod="kube-system/kube-proxy-5v55j" Nov 12 22:41:39.266029 kubelet[2639]: I1112 22:41:39.265799 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/550844d3-5e13-4b40-b1f3-bdfd5861d458-lib-modules\") pod \"kube-proxy-5v55j\" (UID: \"550844d3-5e13-4b40-b1f3-bdfd5861d458\") " pod="kube-system/kube-proxy-5v55j" Nov 12 22:41:39.266029 kubelet[2639]: I1112 22:41:39.265816 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrjrf\" (UniqueName: \"kubernetes.io/projected/550844d3-5e13-4b40-b1f3-bdfd5861d458-kube-api-access-zrjrf\") pod \"kube-proxy-5v55j\" (UID: \"550844d3-5e13-4b40-b1f3-bdfd5861d458\") " pod="kube-system/kube-proxy-5v55j" Nov 12 22:41:39.266029 kubelet[2639]: I1112 22:41:39.265832 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cni-path\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.266029 kubelet[2639]: I1112 22:41:39.265850 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-etc-cni-netd\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.266029 kubelet[2639]: I1112 22:41:39.265868 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-config-path\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.266154 kubelet[2639]: I1112 22:41:39.265886 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-net\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.266154 kubelet[2639]: I1112 22:41:39.265903 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-cgroup\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.266154 kubelet[2639]: I1112 22:41:39.265919 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-hubble-tls\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.266154 kubelet[2639]: I1112 22:41:39.266006 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-bpf-maps\") pod \"cilium-rdpm9\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " pod="kube-system/cilium-rdpm9" Nov 12 22:41:39.269813 systemd[1]: Created slice kubepods-burstable-pod17e10f0a_edfa_4789_8017_df44930f0e11.slice - libcontainer container kubepods-burstable-pod17e10f0a_edfa_4789_8017_df44930f0e11.slice. 
Nov 12 22:41:39.427669 kubelet[2639]: I1112 22:41:39.427415 2639 topology_manager.go:215] "Topology Admit Handler" podUID="03a0bdc6-fc2e-484e-8071-df99015f9f3a" podNamespace="kube-system" podName="cilium-operator-5cc964979-gjp9t" Nov 12 22:41:39.443167 systemd[1]: Created slice kubepods-besteffort-pod03a0bdc6_fc2e_484e_8071_df99015f9f3a.slice - libcontainer container kubepods-besteffort-pod03a0bdc6_fc2e_484e_8071_df99015f9f3a.slice. Nov 12 22:41:39.468167 kubelet[2639]: I1112 22:41:39.467734 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a0bdc6-fc2e-484e-8071-df99015f9f3a-cilium-config-path\") pod \"cilium-operator-5cc964979-gjp9t\" (UID: \"03a0bdc6-fc2e-484e-8071-df99015f9f3a\") " pod="kube-system/cilium-operator-5cc964979-gjp9t" Nov 12 22:41:39.468167 kubelet[2639]: I1112 22:41:39.467832 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vd6k\" (UniqueName: \"kubernetes.io/projected/03a0bdc6-fc2e-484e-8071-df99015f9f3a-kube-api-access-4vd6k\") pod \"cilium-operator-5cc964979-gjp9t\" (UID: \"03a0bdc6-fc2e-484e-8071-df99015f9f3a\") " pod="kube-system/cilium-operator-5cc964979-gjp9t" Nov 12 22:41:39.568581 kubelet[2639]: E1112 22:41:39.568522 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:39.569440 containerd[1472]: time="2024-11-12T22:41:39.569374282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5v55j,Uid:550844d3-5e13-4b40-b1f3-bdfd5861d458,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:39.574190 kubelet[2639]: E1112 22:41:39.573993 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:39.574442 containerd[1472]: time="2024-11-12T22:41:39.574382311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdpm9,Uid:17e10f0a-edfa-4789-8017-df44930f0e11,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:39.609391 containerd[1472]: time="2024-11-12T22:41:39.609213938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:39.609537 containerd[1472]: time="2024-11-12T22:41:39.609313347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:39.609537 containerd[1472]: time="2024-11-12T22:41:39.609443793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:39.609633 containerd[1472]: time="2024-11-12T22:41:39.609559442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:39.615040 containerd[1472]: time="2024-11-12T22:41:39.614696553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:39.615040 containerd[1472]: time="2024-11-12T22:41:39.614769961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:39.615040 containerd[1472]: time="2024-11-12T22:41:39.614785641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:39.615040 containerd[1472]: time="2024-11-12T22:41:39.614885770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:39.639536 systemd[1]: Started cri-containerd-ba092148f5e6fe535960617d8d726d13b1b6bc77359bc70f888eecc013a20628.scope - libcontainer container ba092148f5e6fe535960617d8d726d13b1b6bc77359bc70f888eecc013a20628. Nov 12 22:41:39.642582 systemd[1]: Started cri-containerd-b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1.scope - libcontainer container b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1. Nov 12 22:41:39.672705 containerd[1472]: time="2024-11-12T22:41:39.672568802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdpm9,Uid:17e10f0a-edfa-4789-8017-df44930f0e11,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\"" Nov 12 22:41:39.673987 kubelet[2639]: E1112 22:41:39.673711 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:39.674840 containerd[1472]: time="2024-11-12T22:41:39.674819082Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 22:41:39.677563 containerd[1472]: time="2024-11-12T22:41:39.677511487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5v55j,Uid:550844d3-5e13-4b40-b1f3-bdfd5861d458,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba092148f5e6fe535960617d8d726d13b1b6bc77359bc70f888eecc013a20628\"" Nov 12 22:41:39.678530 kubelet[2639]: E1112 22:41:39.678309 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:39.680720 containerd[1472]: time="2024-11-12T22:41:39.680658560Z" level=info msg="CreateContainer within sandbox \"ba092148f5e6fe535960617d8d726d13b1b6bc77359bc70f888eecc013a20628\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:41:39.703329 containerd[1472]: time="2024-11-12T22:41:39.703269220Z" level=info msg="CreateContainer within sandbox \"ba092148f5e6fe535960617d8d726d13b1b6bc77359bc70f888eecc013a20628\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18467cb18b75620d464734b64e3b56073bd48d28659e4ded1525d035ef6277af\"" Nov 12 22:41:39.703999 containerd[1472]: time="2024-11-12T22:41:39.703966687Z" level=info msg="StartContainer for \"18467cb18b75620d464734b64e3b56073bd48d28659e4ded1525d035ef6277af\"" Nov 12 22:41:39.743844 systemd[1]: Started cri-containerd-18467cb18b75620d464734b64e3b56073bd48d28659e4ded1525d035ef6277af.scope - libcontainer container 18467cb18b75620d464734b64e3b56073bd48d28659e4ded1525d035ef6277af. 
Nov 12 22:41:39.748236 kubelet[2639]: E1112 22:41:39.748194 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:39.749803 containerd[1472]: time="2024-11-12T22:41:39.749734828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gjp9t,Uid:03a0bdc6-fc2e-484e-8071-df99015f9f3a,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:39.786676 containerd[1472]: time="2024-11-12T22:41:39.786527791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:39.786676 containerd[1472]: time="2024-11-12T22:41:39.786626327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:39.786676 containerd[1472]: time="2024-11-12T22:41:39.786640995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:39.786896 containerd[1472]: time="2024-11-12T22:41:39.786745753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:39.791676 containerd[1472]: time="2024-11-12T22:41:39.791613064Z" level=info msg="StartContainer for \"18467cb18b75620d464734b64e3b56073bd48d28659e4ded1525d035ef6277af\" returns successfully" Nov 12 22:41:39.823453 systemd[1]: Started cri-containerd-12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4.scope - libcontainer container 12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4. Nov 12 22:41:39.866238 containerd[1472]: time="2024-11-12T22:41:39.866193928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gjp9t,Uid:03a0bdc6-fc2e-484e-8071-df99015f9f3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\"" Nov 12 22:41:39.868458 kubelet[2639]: E1112 22:41:39.868422 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:40.764287 kubelet[2639]: E1112 22:41:40.764229 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:40.773574 kubelet[2639]: I1112 22:41:40.773158 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5v55j" podStartSLOduration=1.77310736 podStartE2EDuration="1.77310736s" podCreationTimestamp="2024-11-12 22:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:41:40.772857398 +0000 UTC m=+16.441201794" watchObservedRunningTime="2024-11-12 22:41:40.77310736 +0000 UTC m=+16.441451736" Nov 12 22:41:41.768053 kubelet[2639]: E1112 22:41:41.768009 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:48.088627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2054293109.mount: Deactivated successfully. 
Nov 12 22:41:50.979204 containerd[1472]: time="2024-11-12T22:41:50.979114594Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:50.980529 containerd[1472]: time="2024-11-12T22:41:50.980488019Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735371" Nov 12 22:41:50.983312 containerd[1472]: time="2024-11-12T22:41:50.983256760Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:50.985821 containerd[1472]: time="2024-11-12T22:41:50.985719726Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.309907407s" Nov 12 22:41:50.985821 containerd[1472]: time="2024-11-12T22:41:50.985793195Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 22:41:50.986671 containerd[1472]: time="2024-11-12T22:41:50.986642062Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 22:41:50.990422 containerd[1472]: time="2024-11-12T22:41:50.990383134Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:41:51.017376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369885173.mount: Deactivated successfully. Nov 12 22:41:51.024577 containerd[1472]: time="2024-11-12T22:41:51.024498655Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\"" Nov 12 22:41:51.025212 containerd[1472]: time="2024-11-12T22:41:51.025142237Z" level=info msg="StartContainer for \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\"" Nov 12 22:41:51.058577 systemd[1]: Started cri-containerd-5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06.scope - libcontainer container 5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06. Nov 12 22:41:51.100324 systemd[1]: cri-containerd-5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06.scope: Deactivated successfully. 
Nov 12 22:41:51.151655 containerd[1472]: time="2024-11-12T22:41:51.151581228Z" level=info msg="StartContainer for \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\" returns successfully" Nov 12 22:41:51.955243 kubelet[2639]: E1112 22:41:51.955177 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:52.014686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06-rootfs.mount: Deactivated successfully. Nov 12 22:41:52.093322 containerd[1472]: time="2024-11-12T22:41:52.093252751Z" level=info msg="shim disconnected" id=5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06 namespace=k8s.io Nov 12 22:41:52.093322 containerd[1472]: time="2024-11-12T22:41:52.093307373Z" level=warning msg="cleaning up after shim disconnected" id=5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06 namespace=k8s.io Nov 12 22:41:52.093322 containerd[1472]: time="2024-11-12T22:41:52.093315729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:41:52.964882 kubelet[2639]: E1112 22:41:52.963298 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:52.968533 containerd[1472]: time="2024-11-12T22:41:52.968393617Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:41:53.063583 containerd[1472]: time="2024-11-12T22:41:53.063522555Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\"" Nov 12 22:41:53.064668 containerd[1472]: time="2024-11-12T22:41:53.064589060Z" level=info msg="StartContainer for \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\"" Nov 12 22:41:53.104603 systemd[1]: Started cri-containerd-f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8.scope - libcontainer container f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8. Nov 12 22:41:53.155782 containerd[1472]: time="2024-11-12T22:41:53.155706742Z" level=info msg="StartContainer for \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\" returns successfully" Nov 12 22:41:53.165140 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:41:53.165463 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:41:53.165561 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:41:53.172776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:41:53.172999 systemd[1]: cri-containerd-f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8.scope: Deactivated successfully. Nov 12 22:41:53.203936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8-rootfs.mount: Deactivated successfully. Nov 12 22:41:53.205292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 22:41:53.221231 containerd[1472]: time="2024-11-12T22:41:53.221025894Z" level=info msg="shim disconnected" id=f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8 namespace=k8s.io Nov 12 22:41:53.221231 containerd[1472]: time="2024-11-12T22:41:53.221105814Z" level=warning msg="cleaning up after shim disconnected" id=f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8 namespace=k8s.io Nov 12 22:41:53.221231 containerd[1472]: time="2024-11-12T22:41:53.221117095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:41:53.966484 kubelet[2639]: E1112 22:41:53.966447 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:53.968443 containerd[1472]: time="2024-11-12T22:41:53.968386459Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:41:53.989306 containerd[1472]: time="2024-11-12T22:41:53.989225470Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\"" Nov 12 22:41:53.989797 containerd[1472]: time="2024-11-12T22:41:53.989761158Z" level=info msg="StartContainer for \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\"" Nov 12 22:41:54.034682 systemd[1]: Started cri-containerd-4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb.scope - libcontainer container 4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb. Nov 12 22:41:54.083801 systemd[1]: cri-containerd-4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb.scope: Deactivated successfully. Nov 12 22:41:54.086065 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:40644.service - OpenSSH per-connection server daemon (10.0.0.1:40644). Nov 12 22:41:54.086833 containerd[1472]: time="2024-11-12T22:41:54.086793197Z" level=info msg="StartContainer for \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\" returns successfully" Nov 12 22:41:54.108847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb-rootfs.mount: Deactivated successfully. Nov 12 22:41:54.113227 containerd[1472]: time="2024-11-12T22:41:54.113115884Z" level=info msg="shim disconnected" id=4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb namespace=k8s.io Nov 12 22:41:54.113227 containerd[1472]: time="2024-11-12T22:41:54.113178001Z" level=warning msg="cleaning up after shim disconnected" id=4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb namespace=k8s.io Nov 12 22:41:54.113227 containerd[1472]: time="2024-11-12T22:41:54.113186217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:41:54.153852 sshd[3201]: Accepted publickey for core from 10.0.0.1 port 40644 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:41:54.156214 sshd-session[3201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:54.162991 systemd-logind[1445]: New session 8 of user core. Nov 12 22:41:54.169594 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 22:41:54.376546 sshd[3228]: Connection closed by 10.0.0.1 port 40644 Nov 12 22:41:54.376621 sshd-session[3201]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:54.381956 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:40644.service: Deactivated successfully. Nov 12 22:41:54.384720 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:41:54.385436 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:41:54.386465 systemd-logind[1445]: Removed session 8. Nov 12 22:41:54.551434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517172984.mount: Deactivated successfully. Nov 12 22:41:54.970607 kubelet[2639]: E1112 22:41:54.970575 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:54.974919 containerd[1472]: time="2024-11-12T22:41:54.974841372Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:41:55.602944 containerd[1472]: time="2024-11-12T22:41:55.602881210Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\"" Nov 12 22:41:55.603653 containerd[1472]: time="2024-11-12T22:41:55.603535771Z" level=info msg="StartContainer for \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\"" Nov 12 22:41:55.623919 containerd[1472]: time="2024-11-12T22:41:55.623861834Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:55.627932 containerd[1472]: time="2024-11-12T22:41:55.624714517Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907197" Nov 12 22:41:55.627932 containerd[1472]: time="2024-11-12T22:41:55.625942957Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:41:55.630061 containerd[1472]: time="2024-11-12T22:41:55.630018611Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.643343898s" Nov 12 22:41:55.630147 containerd[1472]: time="2024-11-12T22:41:55.630075388Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 22:41:55.632222 containerd[1472]: time="2024-11-12T22:41:55.632193672Z" level=info msg="CreateContainer within sandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 
22:41:55.644533 systemd[1]: Started cri-containerd-4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058.scope - libcontainer container 4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058. Nov 12 22:41:55.656287 containerd[1472]: time="2024-11-12T22:41:55.656221466Z" level=info msg="CreateContainer within sandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\"" Nov 12 22:41:55.658057 containerd[1472]: time="2024-11-12T22:41:55.657016891Z" level=info msg="StartContainer for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\"" Nov 12 22:41:55.672771 systemd[1]: cri-containerd-4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058.scope: Deactivated successfully. Nov 12 22:41:55.676780 containerd[1472]: time="2024-11-12T22:41:55.676735422Z" level=info msg="StartContainer for \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\" returns successfully" Nov 12 22:41:55.691511 systemd[1]: Started cri-containerd-65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a.scope - libcontainer container 65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a. Nov 12 22:41:55.840824 containerd[1472]: time="2024-11-12T22:41:55.840743527Z" level=info msg="StartContainer for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" returns successfully" Nov 12 22:41:55.843232 containerd[1472]: time="2024-11-12T22:41:55.843159370Z" level=info msg="shim disconnected" id=4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058 namespace=k8s.io Nov 12 22:41:55.843232 containerd[1472]: time="2024-11-12T22:41:55.843211738Z" level=warning msg="cleaning up after shim disconnected" id=4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058 namespace=k8s.io Nov 12 22:41:55.843232 containerd[1472]: time="2024-11-12T22:41:55.843221777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:41:55.976498 kubelet[2639]: E1112 22:41:55.975710 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:55.980364 containerd[1472]: time="2024-11-12T22:41:55.979711304Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:41:55.982753 kubelet[2639]: E1112 22:41:55.982181 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:56.009378 containerd[1472]: time="2024-11-12T22:41:56.006395371Z" level=info msg="CreateContainer within sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\"" Nov 12 22:41:56.012711 containerd[1472]: time="2024-11-12T22:41:56.012651003Z" level=info msg="StartContainer for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\"" Nov 12 22:41:56.023838 kubelet[2639]: I1112 22:41:56.023774 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gjp9t" podStartSLOduration=1.2622598520000001 
podStartE2EDuration="17.023727159s" podCreationTimestamp="2024-11-12 22:41:39 +0000 UTC" firstStartedPulling="2024-11-12 22:41:39.868982465 +0000 UTC m=+15.537326851" lastFinishedPulling="2024-11-12 22:41:55.630449772 +0000 UTC m=+31.298794158" observedRunningTime="2024-11-12 22:41:56.023294946 +0000 UTC m=+31.691639332" watchObservedRunningTime="2024-11-12 22:41:56.023727159 +0000 UTC m=+31.692071545" Nov 12 22:41:56.061617 systemd[1]: Started cri-containerd-24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81.scope - libcontainer container 24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81. Nov 12 22:41:56.098612 containerd[1472]: time="2024-11-12T22:41:56.098551385Z" level=info msg="StartContainer for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" returns successfully" Nov 12 22:41:56.248381 kubelet[2639]: I1112 22:41:56.248234 2639 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 22:41:56.456754 kubelet[2639]: I1112 22:41:56.456701 2639 topology_manager.go:215] "Topology Admit Handler" podUID="b0e79ab8-1309-4a55-a0b6-5f86b64fca19" podNamespace="kube-system" podName="coredns-76f75df574-c2p7f" Nov 12 22:41:56.457771 kubelet[2639]: I1112 22:41:56.457744 2639 topology_manager.go:215] "Topology Admit Handler" podUID="addddf3b-6b3c-42bc-929d-8771acb6e6b5" podNamespace="kube-system" podName="coredns-76f75df574-n9f6m" Nov 12 22:41:56.466710 systemd[1]: Created slice kubepods-burstable-podb0e79ab8_1309_4a55_a0b6_5f86b64fca19.slice - libcontainer container kubepods-burstable-podb0e79ab8_1309_4a55_a0b6_5f86b64fca19.slice. Nov 12 22:41:56.474843 systemd[1]: Created slice kubepods-burstable-podaddddf3b_6b3c_42bc_929d_8771acb6e6b5.slice - libcontainer container kubepods-burstable-podaddddf3b_6b3c_42bc_929d_8771acb6e6b5.slice. 
Nov 12 22:41:56.583738 kubelet[2639]: I1112 22:41:56.583697 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0e79ab8-1309-4a55-a0b6-5f86b64fca19-config-volume\") pod \"coredns-76f75df574-c2p7f\" (UID: \"b0e79ab8-1309-4a55-a0b6-5f86b64fca19\") " pod="kube-system/coredns-76f75df574-c2p7f" Nov 12 22:41:56.583738 kubelet[2639]: I1112 22:41:56.583751 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/addddf3b-6b3c-42bc-929d-8771acb6e6b5-config-volume\") pod \"coredns-76f75df574-n9f6m\" (UID: \"addddf3b-6b3c-42bc-929d-8771acb6e6b5\") " pod="kube-system/coredns-76f75df574-n9f6m" Nov 12 22:41:56.583938 kubelet[2639]: I1112 22:41:56.583775 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss94p\" (UniqueName: \"kubernetes.io/projected/b0e79ab8-1309-4a55-a0b6-5f86b64fca19-kube-api-access-ss94p\") pod \"coredns-76f75df574-c2p7f\" (UID: \"b0e79ab8-1309-4a55-a0b6-5f86b64fca19\") " pod="kube-system/coredns-76f75df574-c2p7f" Nov 12 22:41:56.583938 kubelet[2639]: I1112 22:41:56.583797 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvmp\" (UniqueName: \"kubernetes.io/projected/addddf3b-6b3c-42bc-929d-8771acb6e6b5-kube-api-access-2mvmp\") pod \"coredns-76f75df574-n9f6m\" (UID: \"addddf3b-6b3c-42bc-929d-8771acb6e6b5\") " pod="kube-system/coredns-76f75df574-n9f6m" Nov 12 22:41:56.596449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058-rootfs.mount: Deactivated successfully. 
Nov 12 22:41:56.770732 kubelet[2639]: E1112 22:41:56.770677 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:56.778891 kubelet[2639]: E1112 22:41:56.778829 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:56.783785 containerd[1472]: time="2024-11-12T22:41:56.783730365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n9f6m,Uid:addddf3b-6b3c-42bc-929d-8771acb6e6b5,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:56.783895 containerd[1472]: time="2024-11-12T22:41:56.783770661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c2p7f,Uid:b0e79ab8-1309-4a55-a0b6-5f86b64fca19,Namespace:kube-system,Attempt:0,}" Nov 12 22:41:56.988557 kubelet[2639]: E1112 22:41:56.988311 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:56.989543 kubelet[2639]: E1112 22:41:56.988666 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:57.121463 kubelet[2639]: I1112 22:41:57.121415 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rdpm9" podStartSLOduration=6.80943285 podStartE2EDuration="18.121369547s" podCreationTimestamp="2024-11-12 22:41:39 +0000 UTC" firstStartedPulling="2024-11-12 22:41:39.674501141 +0000 UTC m=+15.342845527" lastFinishedPulling="2024-11-12 22:41:50.986437838 +0000 UTC m=+26.654782224" observedRunningTime="2024-11-12 22:41:57.120061227 +0000 UTC m=+32.788405623" watchObservedRunningTime="2024-11-12 22:41:57.121369547 +0000 UTC m=+32.789713933" Nov 12 22:41:57.990700 kubelet[2639]: E1112 22:41:57.990657 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:58.632040 systemd-networkd[1403]: cilium_host: Link UP Nov 12 22:41:58.632225 systemd-networkd[1403]: cilium_net: Link UP Nov 12 22:41:58.632229 systemd-networkd[1403]: cilium_net: Gained carrier Nov 12 22:41:58.632453 systemd-networkd[1403]: cilium_host: Gained carrier Nov 12 22:41:58.634227 systemd-networkd[1403]: cilium_host: Gained IPv6LL Nov 12 22:41:58.752059 systemd-networkd[1403]: cilium_vxlan: Link UP Nov 12 22:41:58.752074 systemd-networkd[1403]: cilium_vxlan: Gained carrier Nov 12 22:41:58.983379 kernel: NET: Registered PF_ALG protocol family Nov 12 22:41:58.992615 kubelet[2639]: E1112 22:41:58.992583 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:59.391604 systemd-networkd[1403]: cilium_net: Gained IPv6LL Nov 12 22:41:59.491173 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:46668.service - OpenSSH per-connection server daemon (10.0.0.1:46668). 
Nov 12 22:41:59.627879 sshd[3635]: Accepted publickey for core from 10.0.0.1 port 46668 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:41:59.635371 sshd-session[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:59.651444 systemd-logind[1445]: New session 9 of user core. Nov 12 22:41:59.660940 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:41:59.913404 sshd[3699]: Connection closed by 10.0.0.1 port 46668 Nov 12 22:41:59.915898 sshd-session[3635]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:59.929791 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:46668.service: Deactivated successfully. Nov 12 22:41:59.936225 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:41:59.968700 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:41:59.981798 systemd-logind[1445]: Removed session 9. Nov 12 22:42:00.482642 systemd-networkd[1403]: lxc_health: Link UP Nov 12 22:42:00.495456 systemd-networkd[1403]: lxc_health: Gained carrier Nov 12 22:42:00.543717 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Nov 12 22:42:01.148039 systemd-networkd[1403]: lxcb12277ac16a3: Link UP Nov 12 22:42:01.185539 systemd-networkd[1403]: lxcec6b70781954: Link UP Nov 12 22:42:01.204043 kernel: eth0: renamed from tmpd3f36 Nov 12 22:42:01.211785 kernel: eth0: renamed from tmp1a9e6 Nov 12 22:42:01.218555 systemd-networkd[1403]: lxcec6b70781954: Gained carrier Nov 12 22:42:01.220094 systemd-networkd[1403]: lxcb12277ac16a3: Gained carrier Nov 12 22:42:01.505054 systemd-networkd[1403]: lxc_health: Gained IPv6LL Nov 12 22:42:01.579467 kubelet[2639]: E1112 22:42:01.578549 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:02.655772 systemd-networkd[1403]: lxcec6b70781954: Gained IPv6LL Nov 12 22:42:02.720285 systemd-networkd[1403]: lxcb12277ac16a3: Gained IPv6LL Nov 12 22:42:02.790601 kubelet[2639]: I1112 22:42:02.786792 2639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:42:02.792823 kubelet[2639]: E1112 22:42:02.792556 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:03.026728 kubelet[2639]: E1112 22:42:03.026655 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:04.938706 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:46680.service - OpenSSH per-connection server daemon (10.0.0.1:46680). Nov 12 22:42:05.133660 sshd[3878]: Accepted publickey for core from 10.0.0.1 port 46680 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:05.136397 sshd-session[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:05.177966 systemd-logind[1445]: New session 10 of user core. Nov 12 22:42:05.214687 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 12 22:42:05.284120 kernel: hrtimer: interrupt took 9063304 ns Nov 12 22:42:05.492492 sshd[3880]: Connection closed by 10.0.0.1 port 46680 Nov 12 22:42:05.493105 sshd-session[3878]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:05.501401 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:46680.service: Deactivated successfully. Nov 12 22:42:05.504456 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:42:05.510966 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:42:05.524477 systemd-logind[1445]: Removed session 10. Nov 12 22:42:07.776315 containerd[1472]: time="2024-11-12T22:42:07.775217170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:42:07.776315 containerd[1472]: time="2024-11-12T22:42:07.775334441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:42:07.776315 containerd[1472]: time="2024-11-12T22:42:07.775442183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:07.782678 containerd[1472]: time="2024-11-12T22:42:07.777806263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:07.829426 systemd[1]: Started cri-containerd-1a9e6c0db7ff8a27ece4daa347b240712a0ec6e082c4f78883a42c2bf03fa58f.scope - libcontainer container 1a9e6c0db7ff8a27ece4daa347b240712a0ec6e082c4f78883a42c2bf03fa58f. Nov 12 22:42:07.868935 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:42:07.944197 containerd[1472]: time="2024-11-12T22:42:07.944094770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c2p7f,Uid:b0e79ab8-1309-4a55-a0b6-5f86b64fca19,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9e6c0db7ff8a27ece4daa347b240712a0ec6e082c4f78883a42c2bf03fa58f\"" Nov 12 22:42:07.945238 kubelet[2639]: E1112 22:42:07.945193 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:07.949713 containerd[1472]: time="2024-11-12T22:42:07.949517154Z" level=info msg="CreateContainer within sandbox \"1a9e6c0db7ff8a27ece4daa347b240712a0ec6e082c4f78883a42c2bf03fa58f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:42:07.955291 containerd[1472]: time="2024-11-12T22:42:07.955091263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:42:07.955693 containerd[1472]: time="2024-11-12T22:42:07.955220385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:42:07.955693 containerd[1472]: time="2024-11-12T22:42:07.955246084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:07.955693 containerd[1472]: time="2024-11-12T22:42:07.955426633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:08.022680 systemd[1]: Started cri-containerd-d3f3688ffcb2e76ff71e2db5e81a8a40bc4550dde636159c18c8616032cd2d15.scope - libcontainer container d3f3688ffcb2e76ff71e2db5e81a8a40bc4550dde636159c18c8616032cd2d15. Nov 12 22:42:08.047495 containerd[1472]: time="2024-11-12T22:42:08.047253518Z" level=info msg="CreateContainer within sandbox \"1a9e6c0db7ff8a27ece4daa347b240712a0ec6e082c4f78883a42c2bf03fa58f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ada071d2408b02d12e5c32c35fa4c2d616918334e5536f05466fc5a7c4f62b0\"" Nov 12 22:42:08.048722 containerd[1472]: time="2024-11-12T22:42:08.048666010Z" level=info msg="StartContainer for \"4ada071d2408b02d12e5c32c35fa4c2d616918334e5536f05466fc5a7c4f62b0\"" Nov 12 22:42:08.076464 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:42:08.128915 systemd[1]: Started cri-containerd-4ada071d2408b02d12e5c32c35fa4c2d616918334e5536f05466fc5a7c4f62b0.scope - libcontainer container 4ada071d2408b02d12e5c32c35fa4c2d616918334e5536f05466fc5a7c4f62b0. Nov 12 22:42:08.151887 containerd[1472]: time="2024-11-12T22:42:08.149546422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n9f6m,Uid:addddf3b-6b3c-42bc-929d-8771acb6e6b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3f3688ffcb2e76ff71e2db5e81a8a40bc4550dde636159c18c8616032cd2d15\"" Nov 12 22:42:08.164414 kubelet[2639]: E1112 22:42:08.164312 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:08.174257 containerd[1472]: time="2024-11-12T22:42:08.174193989Z" level=info msg="CreateContainer within sandbox \"d3f3688ffcb2e76ff71e2db5e81a8a40bc4550dde636159c18c8616032cd2d15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:42:08.235574 containerd[1472]: time="2024-11-12T22:42:08.232473387Z" level=info msg="StartContainer for \"4ada071d2408b02d12e5c32c35fa4c2d616918334e5536f05466fc5a7c4f62b0\" returns successfully" Nov 12 22:42:08.262383 containerd[1472]: time="2024-11-12T22:42:08.262271306Z" level=info msg="CreateContainer within sandbox \"d3f3688ffcb2e76ff71e2db5e81a8a40bc4550dde636159c18c8616032cd2d15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a235c50ee5af2b16467b1fdeb556f119d2279a0f039d46c69933dd1af11f4fa4\"" Nov 12 22:42:08.263641 containerd[1472]: time="2024-11-12T22:42:08.263550729Z" level=info msg="StartContainer for \"a235c50ee5af2b16467b1fdeb556f119d2279a0f039d46c69933dd1af11f4fa4\"" Nov 12 22:42:08.350790 systemd[1]: Started cri-containerd-a235c50ee5af2b16467b1fdeb556f119d2279a0f039d46c69933dd1af11f4fa4.scope - libcontainer container a235c50ee5af2b16467b1fdeb556f119d2279a0f039d46c69933dd1af11f4fa4. 
Nov 12 22:42:08.480567 containerd[1472]: time="2024-11-12T22:42:08.476592577Z" level=info msg="StartContainer for \"a235c50ee5af2b16467b1fdeb556f119d2279a0f039d46c69933dd1af11f4fa4\" returns successfully" Nov 12 22:42:09.078255 kubelet[2639]: E1112 22:42:09.078049 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:09.093739 kubelet[2639]: E1112 22:42:09.093673 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:09.109895 kubelet[2639]: I1112 22:42:09.106240 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c2p7f" podStartSLOduration=30.106185127 podStartE2EDuration="30.106185127s" podCreationTimestamp="2024-11-12 22:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:42:09.101131737 +0000 UTC m=+44.769476133" watchObservedRunningTime="2024-11-12 22:42:09.106185127 +0000 UTC m=+44.774529513" Nov 12 22:42:10.096410 kubelet[2639]: E1112 22:42:10.095995 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:10.535767 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:52606.service - OpenSSH per-connection server daemon (10.0.0.1:52606). Nov 12 22:42:10.606881 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 52606 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:10.610058 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:10.622525 systemd-logind[1445]: New session 11 of user core. Nov 12 22:42:10.633784 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 22:42:10.849781 sshd[4066]: Connection closed by 10.0.0.1 port 52606 Nov 12 22:42:10.850629 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:10.887889 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:52606.service: Deactivated successfully. Nov 12 22:42:10.893205 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:42:10.900118 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Nov 12 22:42:10.907044 systemd-logind[1445]: Removed session 11. Nov 12 22:42:11.095969 kubelet[2639]: E1112 22:42:11.095923 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:15.876066 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:52620.service - OpenSSH per-connection server daemon (10.0.0.1:52620). Nov 12 22:42:15.976428 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 52620 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:15.988393 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:16.013046 systemd-logind[1445]: New session 12 of user core. Nov 12 22:42:16.031775 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 12 22:42:16.295679 sshd[4086]: Connection closed by 10.0.0.1 port 52620 Nov 12 22:42:16.290774 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:16.301884 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:52620.service: Deactivated successfully. Nov 12 22:42:16.313726 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:42:16.319761 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:42:16.334901 systemd-logind[1445]: Removed session 12. Nov 12 22:42:16.781331 kubelet[2639]: E1112 22:42:16.780795 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:16.826297 kubelet[2639]: I1112 22:42:16.821445 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-n9f6m" podStartSLOduration=37.821394461 podStartE2EDuration="37.821394461s" podCreationTimestamp="2024-11-12 22:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:42:09.161615458 +0000 UTC m=+44.829959844" watchObservedRunningTime="2024-11-12 22:42:16.821394461 +0000 UTC m=+52.489738857" Nov 12 22:42:17.117834 kubelet[2639]: E1112 22:42:17.116437 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:21.359580 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:43056.service - OpenSSH per-connection server daemon (10.0.0.1:43056). Nov 12 22:42:21.437085 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 43056 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:21.439563 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:21.485662 systemd-logind[1445]: New session 13 of user core. Nov 12 22:42:21.494865 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 22:42:21.722176 sshd[4105]: Connection closed by 10.0.0.1 port 43056 Nov 12 22:42:21.723616 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:21.748699 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:43056.service: Deactivated successfully. Nov 12 22:42:21.763146 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 22:42:21.791122 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:42:21.799407 systemd-logind[1445]: Removed session 13. Nov 12 22:42:26.761866 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:43064.service - OpenSSH per-connection server daemon (10.0.0.1:43064). Nov 12 22:42:26.861753 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 43064 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:26.865259 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:26.883694 systemd-logind[1445]: New session 14 of user core. Nov 12 22:42:26.893748 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 22:42:27.170057 sshd[4122]: Connection closed by 10.0.0.1 port 43064 Nov 12 22:42:27.171108 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:27.190184 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:43064.service: Deactivated successfully. 
Nov 12 22:42:27.198936 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:42:27.209490 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:42:27.228869 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:34962.service - OpenSSH per-connection server daemon (10.0.0.1:34962). Nov 12 22:42:27.238982 systemd-logind[1445]: Removed session 14. Nov 12 22:42:27.321307 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 34962 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:27.324686 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:27.346661 systemd-logind[1445]: New session 15 of user core. Nov 12 22:42:27.353759 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:42:27.739321 sshd[4138]: Connection closed by 10.0.0.1 port 34962 Nov 12 22:42:27.739898 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:27.766899 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:34962.service: Deactivated successfully. Nov 12 22:42:27.773748 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:42:27.782953 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:42:27.810166 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:34966.service - OpenSSH per-connection server daemon (10.0.0.1:34966). Nov 12 22:42:27.812240 systemd-logind[1445]: Removed session 15. Nov 12 22:42:27.931380 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 34966 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:27.932695 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:27.955084 systemd-logind[1445]: New session 16 of user core. Nov 12 22:42:27.972383 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:42:28.262594 sshd[4150]: Connection closed by 10.0.0.1 port 34966 Nov 12 22:42:28.268076 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:28.273716 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:34966.service: Deactivated successfully. Nov 12 22:42:28.285249 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:42:28.292407 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:42:28.294815 systemd-logind[1445]: Removed session 16. Nov 12 22:42:33.308015 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Nov 12 22:42:33.396584 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:33.403587 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:33.431666 systemd-logind[1445]: New session 17 of user core. Nov 12 22:42:33.459851 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 22:42:33.755210 sshd[4164]: Connection closed by 10.0.0.1 port 34970 Nov 12 22:42:33.756016 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:33.769969 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:34970.service: Deactivated successfully. Nov 12 22:42:33.786568 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:42:33.800658 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:42:33.802656 systemd-logind[1445]: Removed session 17. 
Nov 12 22:42:34.722874 kubelet[2639]: E1112 22:42:34.721409 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:38.792096 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:55730.service - OpenSSH per-connection server daemon (10.0.0.1:55730). Nov 12 22:42:38.868050 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 55730 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:38.873924 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:38.904212 systemd-logind[1445]: New session 18 of user core. Nov 12 22:42:38.910042 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:42:39.140536 sshd[4180]: Connection closed by 10.0.0.1 port 55730 Nov 12 22:42:39.140470 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:39.149751 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:55730.service: Deactivated successfully. Nov 12 22:42:39.156067 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:42:39.169815 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Nov 12 22:42:39.177659 systemd-logind[1445]: Removed session 18. Nov 12 22:42:44.207112 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:55746.service - OpenSSH per-connection server daemon (10.0.0.1:55746). Nov 12 22:42:44.277538 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 55746 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:44.280923 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:44.298236 systemd-logind[1445]: New session 19 of user core. Nov 12 22:42:44.303772 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 22:42:44.548439 sshd[4196]: Connection closed by 10.0.0.1 port 55746 Nov 12 22:42:44.548882 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:44.572157 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:55746.service: Deactivated successfully. Nov 12 22:42:44.584455 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:42:44.588182 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:42:44.593293 systemd-logind[1445]: Removed session 19. Nov 12 22:42:49.563874 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:32976.service - OpenSSH per-connection server daemon (10.0.0.1:32976). Nov 12 22:42:49.610862 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 32976 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:49.613509 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:49.619263 systemd-logind[1445]: New session 20 of user core. Nov 12 22:42:49.635857 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 22:42:49.809287 sshd[4210]: Connection closed by 10.0.0.1 port 32976 Nov 12 22:42:49.809881 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:49.816467 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:32976.service: Deactivated successfully. Nov 12 22:42:49.820179 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:42:49.821211 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Nov 12 22:42:49.822883 systemd-logind[1445]: Removed session 20. 
Nov 12 22:42:54.718891 kubelet[2639]: E1112 22:42:54.718832 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:54.822019 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988). Nov 12 22:42:54.863210 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:54.865076 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:54.870044 systemd-logind[1445]: New session 21 of user core. Nov 12 22:42:54.882539 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:42:55.000990 sshd[4225]: Connection closed by 10.0.0.1 port 32988 Nov 12 22:42:55.002754 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:55.010958 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:32988.service: Deactivated successfully. Nov 12 22:42:55.013557 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:42:55.015678 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:42:55.022847 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:33000.service - OpenSSH per-connection server daemon (10.0.0.1:33000). Nov 12 22:42:55.024143 systemd-logind[1445]: Removed session 21. Nov 12 22:42:55.063002 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 33000 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:55.065036 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:55.070413 systemd-logind[1445]: New session 22 of user core. Nov 12 22:42:55.078623 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:42:55.451926 sshd[4239]: Connection closed by 10.0.0.1 port 33000 Nov 12 22:42:55.452555 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:55.467519 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:33000.service: Deactivated successfully. Nov 12 22:42:55.470773 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:42:55.473057 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Nov 12 22:42:55.478794 systemd[1]: Started sshd@22-10.0.0.30:22-10.0.0.1:33012.service - OpenSSH per-connection server daemon (10.0.0.1:33012). Nov 12 22:42:55.480137 systemd-logind[1445]: Removed session 22. Nov 12 22:42:55.522401 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 33012 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:55.524415 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:55.529645 systemd-logind[1445]: New session 23 of user core. Nov 12 22:42:55.536491 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 22:42:56.990209 sshd[4251]: Connection closed by 10.0.0.1 port 33012 Nov 12 22:42:56.991728 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:57.006538 systemd[1]: sshd@22-10.0.0.30:22-10.0.0.1:33012.service: Deactivated successfully. Nov 12 22:42:57.009647 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 22:42:57.010641 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. 
Nov 12 22:42:57.023009 systemd[1]: Started sshd@23-10.0.0.30:22-10.0.0.1:39520.service - OpenSSH per-connection server daemon (10.0.0.1:39520). Nov 12 22:42:57.028580 systemd-logind[1445]: Removed session 23. Nov 12 22:42:57.076374 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 39520 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:57.078963 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:57.083926 systemd-logind[1445]: New session 24 of user core. Nov 12 22:42:57.093706 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 22:42:57.348693 sshd[4270]: Connection closed by 10.0.0.1 port 39520 Nov 12 22:42:57.349664 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:57.359963 systemd[1]: sshd@23-10.0.0.30:22-10.0.0.1:39520.service: Deactivated successfully. Nov 12 22:42:57.362265 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 22:42:57.363793 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Nov 12 22:42:57.371670 systemd[1]: Started sshd@24-10.0.0.30:22-10.0.0.1:39534.service - OpenSSH per-connection server daemon (10.0.0.1:39534). Nov 12 22:42:57.372722 systemd-logind[1445]: Removed session 24. Nov 12 22:42:57.412222 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 39534 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:42:57.414316 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:57.418797 systemd-logind[1445]: New session 25 of user core. Nov 12 22:42:57.429463 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 22:42:57.593909 sshd[4282]: Connection closed by 10.0.0.1 port 39534 Nov 12 22:42:57.594328 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:57.599430 systemd[1]: sshd@24-10.0.0.30:22-10.0.0.1:39534.service: Deactivated successfully. Nov 12 22:42:57.602638 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 22:42:57.603576 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Nov 12 22:42:57.604807 systemd-logind[1445]: Removed session 25. Nov 12 22:43:02.607530 systemd[1]: Started sshd@25-10.0.0.30:22-10.0.0.1:39536.service - OpenSSH per-connection server daemon (10.0.0.1:39536). Nov 12 22:43:02.648088 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 39536 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:02.649638 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:02.653749 systemd-logind[1445]: New session 26 of user core. Nov 12 22:43:02.660530 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 22:43:02.776445 sshd[4296]: Connection closed by 10.0.0.1 port 39536 Nov 12 22:43:02.776874 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:02.781188 systemd[1]: sshd@25-10.0.0.30:22-10.0.0.1:39536.service: Deactivated successfully. Nov 12 22:43:02.783270 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 22:43:02.784029 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Nov 12 22:43:02.785120 systemd-logind[1445]: Removed session 26. 
Nov 12 22:43:04.718801 kubelet[2639]: E1112 22:43:04.718758 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:07.719024 kubelet[2639]: E1112 22:43:07.718978 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:07.789235 systemd[1]: Started sshd@26-10.0.0.30:22-10.0.0.1:43328.service - OpenSSH per-connection server daemon (10.0.0.1:43328). Nov 12 22:43:07.830848 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 43328 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:07.832457 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:07.836620 systemd-logind[1445]: New session 27 of user core. Nov 12 22:43:07.845479 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 22:43:07.957963 sshd[4313]: Connection closed by 10.0.0.1 port 43328 Nov 12 22:43:07.958425 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:07.962937 systemd[1]: sshd@26-10.0.0.30:22-10.0.0.1:43328.service: Deactivated successfully. Nov 12 22:43:07.965351 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 22:43:07.965955 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Nov 12 22:43:07.966926 systemd-logind[1445]: Removed session 27. Nov 12 22:43:10.719504 kubelet[2639]: E1112 22:43:10.719455 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:12.970688 systemd[1]: Started sshd@27-10.0.0.30:22-10.0.0.1:43334.service - OpenSSH per-connection server daemon (10.0.0.1:43334). Nov 12 22:43:13.011910 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 43334 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:13.013583 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:13.018050 systemd-logind[1445]: New session 28 of user core. Nov 12 22:43:13.026615 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 22:43:13.142507 sshd[4329]: Connection closed by 10.0.0.1 port 43334 Nov 12 22:43:13.142937 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:13.148734 systemd[1]: sshd@27-10.0.0.30:22-10.0.0.1:43334.service: Deactivated successfully. Nov 12 22:43:13.151701 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 22:43:13.152493 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit. Nov 12 22:43:13.153530 systemd-logind[1445]: Removed session 28. Nov 12 22:43:15.719365 kubelet[2639]: E1112 22:43:15.719305 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:18.163599 systemd[1]: Started sshd@28-10.0.0.30:22-10.0.0.1:35856.service - OpenSSH per-connection server daemon (10.0.0.1:35856). 
Nov 12 22:43:18.205579 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:18.207361 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:18.211716 systemd-logind[1445]: New session 29 of user core. Nov 12 22:43:18.223483 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 12 22:43:18.336497 sshd[4343]: Connection closed by 10.0.0.1 port 35856 Nov 12 22:43:18.336977 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:18.349174 systemd[1]: sshd@28-10.0.0.30:22-10.0.0.1:35856.service: Deactivated successfully. Nov 12 22:43:18.353227 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 22:43:18.356902 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit. Nov 12 22:43:18.362627 systemd[1]: Started sshd@29-10.0.0.30:22-10.0.0.1:35858.service - OpenSSH per-connection server daemon (10.0.0.1:35858). Nov 12 22:43:18.363753 systemd-logind[1445]: Removed session 29. Nov 12 22:43:18.400165 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 35858 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:18.401768 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:18.406801 systemd-logind[1445]: New session 30 of user core. Nov 12 22:43:18.420596 systemd[1]: Started session-30.scope - Session 30 of User core. Nov 12 22:43:19.776587 containerd[1472]: time="2024-11-12T22:43:19.776424083Z" level=info msg="StopContainer for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" with timeout 30 (s)" Nov 12 22:43:19.794359 containerd[1472]: time="2024-11-12T22:43:19.794265640Z" level=info msg="Stop container \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" with signal terminated" Nov 12 22:43:19.807560 systemd[1]: cri-containerd-65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a.scope: Deactivated successfully. Nov 12 22:43:19.828963 containerd[1472]: time="2024-11-12T22:43:19.828899670Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:43:19.831720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a-rootfs.mount: Deactivated successfully. 
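The containerd error about failing to reload the CNI configuration is emitted by a filesystem watcher on /etc/cni/net.d reacting to the REMOVE of 05-cilium.conf while no other network config remains. A rough sketch of such a watcher using the fsnotify package is below; the dependency choice and the syncNetworkConfig helper are assumptions for illustration, not containerd's actual code.

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Watch the CNI configuration directory named in the log entry above.
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for event := range watcher.Events {
		if event.Op&(fsnotify.Remove|fsnotify.Create|fsnotify.Write) != 0 {
			if err := syncNetworkConfig(); err != nil {
				// Mirrors the log line: the reload fails when no config is left.
				log.Printf("failed to reload cni configuration after %s: %v", event, err)
			}
		}
	}
}

// syncNetworkConfig is a hypothetical placeholder for re-scanning
// /etc/cni/net.d and rebuilding the runtime's network configuration.
func syncNetworkConfig() error {
	return nil
}
```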
Nov 12 22:43:19.834413 containerd[1472]: time="2024-11-12T22:43:19.834373142Z" level=info msg="StopContainer for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" with timeout 2 (s)" Nov 12 22:43:19.834699 containerd[1472]: time="2024-11-12T22:43:19.834667500Z" level=info msg="Stop container \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" with signal terminated" Nov 12 22:43:19.843016 systemd-networkd[1403]: lxc_health: Link DOWN Nov 12 22:43:19.843026 systemd-networkd[1403]: lxc_health: Lost carrier Nov 12 22:43:19.843510 containerd[1472]: time="2024-11-12T22:43:19.843201205Z" level=info msg="shim disconnected" id=65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a namespace=k8s.io Nov 12 22:43:19.843510 containerd[1472]: time="2024-11-12T22:43:19.843246832Z" level=warning msg="cleaning up after shim disconnected" id=65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a namespace=k8s.io Nov 12 22:43:19.843510 containerd[1472]: time="2024-11-12T22:43:19.843255558Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:19.863002 containerd[1472]: time="2024-11-12T22:43:19.862952548Z" level=info msg="StopContainer for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" returns successfully" Nov 12 22:43:19.864258 systemd[1]: cri-containerd-24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81.scope: Deactivated successfully. Nov 12 22:43:19.864583 systemd[1]: cri-containerd-24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81.scope: Consumed 10.956s CPU time. Nov 12 22:43:19.867737 containerd[1472]: time="2024-11-12T22:43:19.867696350Z" level=info msg="StopPodSandbox for \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\"" Nov 12 22:43:19.883957 containerd[1472]: time="2024-11-12T22:43:19.867748008Z" level=info msg="Container to stop \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:43:19.886089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4-shm.mount: Deactivated successfully. Nov 12 22:43:19.892183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81-rootfs.mount: Deactivated successfully. Nov 12 22:43:19.893292 systemd[1]: cri-containerd-12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4.scope: Deactivated successfully. Nov 12 22:43:19.904975 containerd[1472]: time="2024-11-12T22:43:19.904882861Z" level=info msg="shim disconnected" id=24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81 namespace=k8s.io Nov 12 22:43:19.904975 containerd[1472]: time="2024-11-12T22:43:19.904959747Z" level=warning msg="cleaning up after shim disconnected" id=24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81 namespace=k8s.io Nov 12 22:43:19.904975 containerd[1472]: time="2024-11-12T22:43:19.904971119Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:19.917196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4-rootfs.mount: Deactivated successfully. 
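The StopContainer entries ("with timeout N", "with signal terminated", followed by the cri-containerd scope deactivations and shim disconnects) reflect the usual stop sequence: send SIGTERM, wait up to the timeout, then fall back to SIGKILL. A hedged sketch of that flow against the containerd Go client is below; the stopContainer helper and the use of the "k8s.io" namespace are assumptions for illustration, not the CRI plugin's actual code.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// stopContainer is a hypothetical helper: SIGTERM first, SIGKILL once the
// timeout expires, matching the "Stop container ... with signal terminated"
// and "timeout" entries in the log above.
func stopContainer(client *containerd.Client, id string, timeout time.Duration) error {
	// Kubernetes-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}

	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh:
		// graceful exit within the timeout
	case <-time.After(timeout):
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	_, err = task.Delete(ctx)
	return err
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// Container ID taken from the StartContainer entry earlier in the log.
	if err := stopContainer(client, "a235c50ee5af2b16467b1fdeb556f119d2279a0f039d46c69933dd1af11f4fa4", 30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```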
Nov 12 22:43:19.919489 containerd[1472]: time="2024-11-12T22:43:19.919301810Z" level=info msg="shim disconnected" id=12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4 namespace=k8s.io Nov 12 22:43:19.919489 containerd[1472]: time="2024-11-12T22:43:19.919386680Z" level=warning msg="cleaning up after shim disconnected" id=12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4 namespace=k8s.io Nov 12 22:43:19.919489 containerd[1472]: time="2024-11-12T22:43:19.919398101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:19.924075 containerd[1472]: time="2024-11-12T22:43:19.924016314Z" level=info msg="StopContainer for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" returns successfully" Nov 12 22:43:19.924648 containerd[1472]: time="2024-11-12T22:43:19.924609348Z" level=info msg="StopPodSandbox for \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\"" Nov 12 22:43:19.924712 containerd[1472]: time="2024-11-12T22:43:19.924670253Z" level=info msg="Container to stop \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:43:19.924752 containerd[1472]: time="2024-11-12T22:43:19.924712683Z" level=info msg="Container to stop \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:43:19.924752 containerd[1472]: time="2024-11-12T22:43:19.924724847Z" level=info msg="Container to stop \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:43:19.924752 containerd[1472]: time="2024-11-12T22:43:19.924738142Z" level=info msg="Container to stop \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:43:19.924752 containerd[1472]: time="2024-11-12T22:43:19.924749423Z" level=info msg="Container to stop \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:43:19.932783 systemd[1]: cri-containerd-b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1.scope: Deactivated successfully. 
Nov 12 22:43:19.937187 containerd[1472]: time="2024-11-12T22:43:19.937132427Z" level=info msg="TearDown network for sandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" successfully" Nov 12 22:43:19.937187 containerd[1472]: time="2024-11-12T22:43:19.937168464Z" level=info msg="StopPodSandbox for \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" returns successfully" Nov 12 22:43:19.964388 containerd[1472]: time="2024-11-12T22:43:19.964295762Z" level=info msg="shim disconnected" id=b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1 namespace=k8s.io Nov 12 22:43:19.964388 containerd[1472]: time="2024-11-12T22:43:19.964373920Z" level=warning msg="cleaning up after shim disconnected" id=b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1 namespace=k8s.io Nov 12 22:43:19.964388 containerd[1472]: time="2024-11-12T22:43:19.964385942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:19.981909 containerd[1472]: time="2024-11-12T22:43:19.981838795Z" level=info msg="TearDown network for sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" successfully" Nov 12 22:43:19.981909 containerd[1472]: time="2024-11-12T22:43:19.981885944Z" level=info msg="StopPodSandbox for \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" returns successfully" Nov 12 22:43:19.983627 kubelet[2639]: I1112 22:43:19.983579 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vd6k\" (UniqueName: \"kubernetes.io/projected/03a0bdc6-fc2e-484e-8071-df99015f9f3a-kube-api-access-4vd6k\") pod \"03a0bdc6-fc2e-484e-8071-df99015f9f3a\" (UID: \"03a0bdc6-fc2e-484e-8071-df99015f9f3a\") " Nov 12 22:43:19.983627 kubelet[2639]: I1112 22:43:19.983630 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a0bdc6-fc2e-484e-8071-df99015f9f3a-cilium-config-path\") pod \"03a0bdc6-fc2e-484e-8071-df99015f9f3a\" (UID: \"03a0bdc6-fc2e-484e-8071-df99015f9f3a\") " Nov 12 22:43:19.987533 kubelet[2639]: I1112 22:43:19.987081 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a0bdc6-fc2e-484e-8071-df99015f9f3a-kube-api-access-4vd6k" (OuterVolumeSpecName: "kube-api-access-4vd6k") pod "03a0bdc6-fc2e-484e-8071-df99015f9f3a" (UID: "03a0bdc6-fc2e-484e-8071-df99015f9f3a"). InnerVolumeSpecName "kube-api-access-4vd6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:43:19.988108 kubelet[2639]: I1112 22:43:19.988039 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a0bdc6-fc2e-484e-8071-df99015f9f3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03a0bdc6-fc2e-484e-8071-df99015f9f3a" (UID: "03a0bdc6-fc2e-484e-8071-df99015f9f3a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:43:20.084118 kubelet[2639]: I1112 22:43:20.083926 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-hubble-tls\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084118 kubelet[2639]: I1112 22:43:20.083967 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-etc-cni-netd\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084118 kubelet[2639]: I1112 22:43:20.083983 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-lib-modules\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084118 kubelet[2639]: I1112 22:43:20.084001 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-kernel\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084118 kubelet[2639]: I1112 22:43:20.084021 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-bpf-maps\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084118 kubelet[2639]: I1112 22:43:20.084041 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-net\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084497 kubelet[2639]: I1112 22:43:20.084079 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-cgroup\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084497 kubelet[2639]: I1112 22:43:20.084076 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.084497 kubelet[2639]: I1112 22:43:20.084106 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxw96\" (UniqueName: \"kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-kube-api-access-sxw96\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084497 kubelet[2639]: I1112 22:43:20.084166 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-xtables-lock\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084497 kubelet[2639]: I1112 22:43:20.084188 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-run\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084497 kubelet[2639]: I1112 22:43:20.084217 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17e10f0a-edfa-4789-8017-df44930f0e11-clustermesh-secrets\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084235 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cni-path\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084254 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-config-path\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084271 2639 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-hostproc\") pod \"17e10f0a-edfa-4789-8017-df44930f0e11\" (UID: \"17e10f0a-edfa-4789-8017-df44930f0e11\") " Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084315 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a0bdc6-fc2e-484e-8071-df99015f9f3a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084326 2639 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084337 2639 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4vd6k\" (UniqueName: \"kubernetes.io/projected/03a0bdc6-fc2e-484e-8071-df99015f9f3a-kube-api-access-4vd6k\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.084730 kubelet[2639]: I1112 22:43:20.084370 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-hostproc" 
(OuterVolumeSpecName: "hostproc") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.084984 kubelet[2639]: I1112 22:43:20.084908 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.084984 kubelet[2639]: I1112 22:43:20.084934 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.084984 kubelet[2639]: I1112 22:43:20.084952 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.084984 kubelet[2639]: I1112 22:43:20.084970 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.084984 kubelet[2639]: I1112 22:43:20.084987 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.085279 kubelet[2639]: I1112 22:43:20.085246 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cni-path" (OuterVolumeSpecName: "cni-path") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.087637 kubelet[2639]: I1112 22:43:20.087602 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-kube-api-access-sxw96" (OuterVolumeSpecName: "kube-api-access-sxw96") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "kube-api-access-sxw96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:43:20.087902 kubelet[2639]: I1112 22:43:20.087859 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.087902 kubelet[2639]: I1112 22:43:20.087884 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:43:20.088764 kubelet[2639]: I1112 22:43:20.088730 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:43:20.088999 kubelet[2639]: I1112 22:43:20.088979 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:43:20.089564 kubelet[2639]: I1112 22:43:20.089527 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e10f0a-edfa-4789-8017-df44930f0e11-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "17e10f0a-edfa-4789-8017-df44930f0e11" (UID: "17e10f0a-edfa-4789-8017-df44930f0e11"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:43:20.185279 kubelet[2639]: I1112 22:43:20.185209 2639 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185279 kubelet[2639]: I1112 22:43:20.185264 2639 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185279 kubelet[2639]: I1112 22:43:20.185280 2639 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185279 kubelet[2639]: I1112 22:43:20.185297 2639 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185279 kubelet[2639]: I1112 22:43:20.185310 2639 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185330 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185373 2639 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sxw96\" (UniqueName: \"kubernetes.io/projected/17e10f0a-edfa-4789-8017-df44930f0e11-kube-api-access-sxw96\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185389 2639 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185402 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185415 2639 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17e10f0a-edfa-4789-8017-df44930f0e11-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185430 2639 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185443 2639 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17e10f0a-edfa-4789-8017-df44930f0e11-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:43:20.185598 kubelet[2639]: I1112 22:43:20.185455 2639 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17e10f0a-edfa-4789-8017-df44930f0e11-hostproc\") on node \"localhost\" DevicePath 
\"\"" Nov 12 22:43:20.354812 kubelet[2639]: I1112 22:43:20.354522 2639 scope.go:117] "RemoveContainer" containerID="24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81" Nov 12 22:43:20.361696 systemd[1]: Removed slice kubepods-burstable-pod17e10f0a_edfa_4789_8017_df44930f0e11.slice - libcontainer container kubepods-burstable-pod17e10f0a_edfa_4789_8017_df44930f0e11.slice. Nov 12 22:43:20.362045 systemd[1]: kubepods-burstable-pod17e10f0a_edfa_4789_8017_df44930f0e11.slice: Consumed 11.074s CPU time. Nov 12 22:43:20.363372 containerd[1472]: time="2024-11-12T22:43:20.363307150Z" level=info msg="RemoveContainer for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\"" Nov 12 22:43:20.365315 systemd[1]: Removed slice kubepods-besteffort-pod03a0bdc6_fc2e_484e_8071_df99015f9f3a.slice - libcontainer container kubepods-besteffort-pod03a0bdc6_fc2e_484e_8071_df99015f9f3a.slice. Nov 12 22:43:20.370765 containerd[1472]: time="2024-11-12T22:43:20.370735361Z" level=info msg="RemoveContainer for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" returns successfully" Nov 12 22:43:20.371109 kubelet[2639]: I1112 22:43:20.371053 2639 scope.go:117] "RemoveContainer" containerID="4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058" Nov 12 22:43:20.372871 containerd[1472]: time="2024-11-12T22:43:20.372838391Z" level=info msg="RemoveContainer for \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\"" Nov 12 22:43:20.377116 containerd[1472]: time="2024-11-12T22:43:20.377072888Z" level=info msg="RemoveContainer for \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\" returns successfully" Nov 12 22:43:20.377623 kubelet[2639]: I1112 22:43:20.377225 2639 scope.go:117] "RemoveContainer" containerID="4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb" Nov 12 22:43:20.378222 containerd[1472]: time="2024-11-12T22:43:20.378179211Z" level=info msg="RemoveContainer for \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\"" Nov 12 22:43:20.382352 containerd[1472]: time="2024-11-12T22:43:20.382227224Z" level=info msg="RemoveContainer for \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\" returns successfully" Nov 12 22:43:20.382621 kubelet[2639]: I1112 22:43:20.382571 2639 scope.go:117] "RemoveContainer" containerID="f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8" Nov 12 22:43:20.383750 containerd[1472]: time="2024-11-12T22:43:20.383720871Z" level=info msg="RemoveContainer for \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\"" Nov 12 22:43:20.387007 containerd[1472]: time="2024-11-12T22:43:20.386970752Z" level=info msg="RemoveContainer for \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\" returns successfully" Nov 12 22:43:20.387149 kubelet[2639]: I1112 22:43:20.387122 2639 scope.go:117] "RemoveContainer" containerID="5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06" Nov 12 22:43:20.388298 containerd[1472]: time="2024-11-12T22:43:20.388016883Z" level=info msg="RemoveContainer for \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\"" Nov 12 22:43:20.391176 containerd[1472]: time="2024-11-12T22:43:20.391140306Z" level=info msg="RemoveContainer for \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\" returns successfully" Nov 12 22:43:20.391291 kubelet[2639]: I1112 22:43:20.391274 2639 scope.go:117] "RemoveContainer" 
containerID="24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81" Nov 12 22:43:20.391477 containerd[1472]: time="2024-11-12T22:43:20.391437228Z" level=error msg="ContainerStatus for \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\": not found" Nov 12 22:43:20.399051 kubelet[2639]: E1112 22:43:20.399006 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\": not found" containerID="24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81" Nov 12 22:43:20.399144 kubelet[2639]: I1112 22:43:20.399123 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81"} err="failed to get container status \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\": rpc error: code = NotFound desc = an error occurred when try to find container \"24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81\": not found" Nov 12 22:43:20.399144 kubelet[2639]: I1112 22:43:20.399138 2639 scope.go:117] "RemoveContainer" containerID="4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058" Nov 12 22:43:20.399331 containerd[1472]: time="2024-11-12T22:43:20.399282979Z" level=error msg="ContainerStatus for \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\": not found" Nov 12 22:43:20.402378 kubelet[2639]: E1112 22:43:20.399613 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\": not found" containerID="4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058" Nov 12 22:43:20.402378 kubelet[2639]: I1112 22:43:20.399789 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058"} err="failed to get container status \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\": rpc error: code = NotFound desc = an error occurred when try to find container \"4efdb11081d674499b1b077246059b3c4666bf63d73f77ed0828a5e8fe01c058\": not found" Nov 12 22:43:20.402378 kubelet[2639]: I1112 22:43:20.399813 2639 scope.go:117] "RemoveContainer" containerID="4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb" Nov 12 22:43:20.402489 containerd[1472]: time="2024-11-12T22:43:20.402458750Z" level=error msg="ContainerStatus for \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\": not found" Nov 12 22:43:20.403100 kubelet[2639]: E1112 22:43:20.403064 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\": 
not found" containerID="4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb" Nov 12 22:43:20.403143 kubelet[2639]: I1112 22:43:20.403105 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb"} err="failed to get container status \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f032c88db9d0898f5921f9038e66b63a4cfe93228579a73b687f354e3dde8fb\": not found" Nov 12 22:43:20.403143 kubelet[2639]: I1112 22:43:20.403117 2639 scope.go:117] "RemoveContainer" containerID="f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8" Nov 12 22:43:20.403299 containerd[1472]: time="2024-11-12T22:43:20.403266489Z" level=error msg="ContainerStatus for \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\": not found" Nov 12 22:43:20.403464 kubelet[2639]: E1112 22:43:20.403394 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\": not found" containerID="f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8" Nov 12 22:43:20.403464 kubelet[2639]: I1112 22:43:20.403426 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8"} err="failed to get container status \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2a565a0b3d228b326481ca921058cb8986f236388934a0d97df6f79af3abfa8\": not found" Nov 12 22:43:20.403464 kubelet[2639]: I1112 22:43:20.403438 2639 scope.go:117] "RemoveContainer" containerID="5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06" Nov 12 22:43:20.403584 containerd[1472]: time="2024-11-12T22:43:20.403542632Z" level=error msg="ContainerStatus for \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\": not found" Nov 12 22:43:20.403795 kubelet[2639]: E1112 22:43:20.403766 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\": not found" containerID="5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06" Nov 12 22:43:20.403851 kubelet[2639]: I1112 22:43:20.403814 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06"} err="failed to get container status \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ac3154ede252a296cc489ff14359b785ec071cc4fe64637bfc22ac041799a06\": not found" Nov 12 22:43:20.403851 kubelet[2639]: I1112 22:43:20.403833 2639 scope.go:117] "RemoveContainer" 
containerID="65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a" Nov 12 22:43:20.404930 containerd[1472]: time="2024-11-12T22:43:20.404900733Z" level=info msg="RemoveContainer for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\"" Nov 12 22:43:20.408615 containerd[1472]: time="2024-11-12T22:43:20.408578845Z" level=info msg="RemoveContainer for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" returns successfully" Nov 12 22:43:20.408784 kubelet[2639]: I1112 22:43:20.408755 2639 scope.go:117] "RemoveContainer" containerID="65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a" Nov 12 22:43:20.408964 containerd[1472]: time="2024-11-12T22:43:20.408909611Z" level=error msg="ContainerStatus for \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\": not found" Nov 12 22:43:20.409076 kubelet[2639]: E1112 22:43:20.409048 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\": not found" containerID="65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a" Nov 12 22:43:20.409108 kubelet[2639]: I1112 22:43:20.409086 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a"} err="failed to get container status \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"65ce687bdce966b169c3e78e99f80f36795e40ede7fbc86ad656f943ac30ac0a\": not found" Nov 12 22:43:20.721705 kubelet[2639]: I1112 22:43:20.721578 2639 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="03a0bdc6-fc2e-484e-8071-df99015f9f3a" path="/var/lib/kubelet/pods/03a0bdc6-fc2e-484e-8071-df99015f9f3a/volumes" Nov 12 22:43:20.722233 kubelet[2639]: I1112 22:43:20.722214 2639 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" path="/var/lib/kubelet/pods/17e10f0a-edfa-4789-8017-df44930f0e11/volumes" Nov 12 22:43:20.806271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1-rootfs.mount: Deactivated successfully. Nov 12 22:43:20.806443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1-shm.mount: Deactivated successfully. Nov 12 22:43:20.806552 systemd[1]: var-lib-kubelet-pods-03a0bdc6\x2dfc2e\x2d484e\x2d8071\x2ddf99015f9f3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vd6k.mount: Deactivated successfully. Nov 12 22:43:20.806668 systemd[1]: var-lib-kubelet-pods-17e10f0a\x2dedfa\x2d4789\x2d8017\x2ddf44930f0e11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsxw96.mount: Deactivated successfully. Nov 12 22:43:20.806777 systemd[1]: var-lib-kubelet-pods-17e10f0a\x2dedfa\x2d4789\x2d8017\x2ddf44930f0e11-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 22:43:20.806870 systemd[1]: var-lib-kubelet-pods-17e10f0a\x2dedfa\x2d4789\x2d8017\x2ddf44930f0e11-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
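The "ContainerStatus ... not found" errors and the matching "DeleteContainer returned error" entries show the kubelet tolerating containers that have already been removed from containerd during cleanup. A small sketch of the same tolerate-NotFound pattern with the containerd Go client and its errdefs helpers is below; removeIfPresent is a hypothetical helper written for illustration, not kubelet or CRI code.

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

// removeIfPresent treats an already-deleted container as success, the same
// tolerance the cleanup entries above rely on when a status lookup returns
// NotFound.
func removeIfPresent(ctx context.Context, client *containerd.Client, id string) error {
	container, err := client.LoadContainer(ctx, id)
	if errdefs.IsNotFound(err) {
		return nil // already gone; nothing left to remove
	}
	if err != nil {
		return fmt.Errorf("load container %s: %w", id, err)
	}
	return container.Delete(ctx)
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// ID taken from the log above; by this point it has already been removed,
	// so the call is expected to return nil rather than an error.
	if err := removeIfPresent(ctx, client, "24bea95404afafd17351ad7bbf22c6dca8f8e1865e0a0049b2f5f0842988ca81"); err != nil {
		log.Fatal(err)
	}
}
```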
Nov 12 22:43:21.743003 sshd[4357]: Connection closed by 10.0.0.1 port 35858 Nov 12 22:43:21.743524 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:21.754118 systemd[1]: sshd@29-10.0.0.30:22-10.0.0.1:35858.service: Deactivated successfully. Nov 12 22:43:21.756078 systemd[1]: session-30.scope: Deactivated successfully. Nov 12 22:43:21.757894 systemd-logind[1445]: Session 30 logged out. Waiting for processes to exit. Nov 12 22:43:21.766620 systemd[1]: Started sshd@30-10.0.0.30:22-10.0.0.1:35868.service - OpenSSH per-connection server daemon (10.0.0.1:35868). Nov 12 22:43:21.767524 systemd-logind[1445]: Removed session 30. Nov 12 22:43:21.803371 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 35868 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:21.805447 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:21.809545 systemd-logind[1445]: New session 31 of user core. Nov 12 22:43:21.818474 systemd[1]: Started session-31.scope - Session 31 of User core. Nov 12 22:43:22.295267 sshd[4515]: Connection closed by 10.0.0.1 port 35868 Nov 12 22:43:22.296191 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:22.307932 systemd[1]: sshd@30-10.0.0.30:22-10.0.0.1:35868.service: Deactivated successfully. Nov 12 22:43:22.310957 kubelet[2639]: I1112 22:43:22.310678 2639 topology_manager.go:215] "Topology Admit Handler" podUID="6521e806-3775-4648-85cc-fc5ebcde1074" podNamespace="kube-system" podName="cilium-mgtn9" Nov 12 22:43:22.310957 kubelet[2639]: E1112 22:43:22.310739 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" containerName="apply-sysctl-overwrites" Nov 12 22:43:22.310957 kubelet[2639]: E1112 22:43:22.310749 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" containerName="clean-cilium-state" Nov 12 22:43:22.310957 kubelet[2639]: E1112 22:43:22.310755 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03a0bdc6-fc2e-484e-8071-df99015f9f3a" containerName="cilium-operator" Nov 12 22:43:22.310957 kubelet[2639]: E1112 22:43:22.310762 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" containerName="cilium-agent" Nov 12 22:43:22.310957 kubelet[2639]: E1112 22:43:22.310770 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" containerName="mount-cgroup" Nov 12 22:43:22.310957 kubelet[2639]: E1112 22:43:22.310778 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" containerName="mount-bpf-fs" Nov 12 22:43:22.310957 kubelet[2639]: I1112 22:43:22.310802 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a0bdc6-fc2e-484e-8071-df99015f9f3a" containerName="cilium-operator" Nov 12 22:43:22.310957 kubelet[2639]: I1112 22:43:22.310809 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e10f0a-edfa-4789-8017-df44930f0e11" containerName="cilium-agent" Nov 12 22:43:22.311937 systemd[1]: session-31.scope: Deactivated successfully. Nov 12 22:43:22.320002 systemd-logind[1445]: Session 31 logged out. Waiting for processes to exit. Nov 12 22:43:22.332931 systemd[1]: Started sshd@31-10.0.0.30:22-10.0.0.1:35880.service - OpenSSH per-connection server daemon (10.0.0.1:35880). 
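Editor's note: while admitting cilium-mgtn9, the CPU and memory managers drop per-container state left behind by the two pods that were just deleted, so stale resource assignments cannot leak into the new pod. A toy sketch of that cleanup, using assumed types rather than the kubelet's real state store:

```go
package main

import "fmt"

// assignments maps podUID -> containerName -> some opaque resource assignment.
type assignments map[string]map[string]string

// removeStaleState drops state for any pod that is no longer active,
// mirroring the "RemoveStaleState: removing container" lines above.
func removeStaleState(state assignments, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID)
	}
}

func main() {
	state := assignments{
		"17e10f0a-edfa-4789-8017-df44930f0e11": {"cilium-agent": "cpuset 2-3"},
		"6521e806-3775-4648-85cc-fc5ebcde1074": {"mount-cgroup": "cpuset 0-1"},
	}
	removeStaleState(state, map[string]bool{"6521e806-3775-4648-85cc-fc5ebcde1074": true})
	fmt.Println(len(state)) // 1: only the admitted pod's state remains
}
```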
Nov 12 22:43:22.338393 systemd-logind[1445]: Removed session 31. Nov 12 22:43:22.343396 systemd[1]: Created slice kubepods-burstable-pod6521e806_3775_4648_85cc_fc5ebcde1074.slice - libcontainer container kubepods-burstable-pod6521e806_3775_4648_85cc_fc5ebcde1074.slice. Nov 12 22:43:22.370394 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 35880 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:22.371971 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:22.376458 systemd-logind[1445]: New session 32 of user core. Nov 12 22:43:22.390627 systemd[1]: Started session-32.scope - Session 32 of User core. Nov 12 22:43:22.398717 kubelet[2639]: I1112 22:43:22.398668 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-hostproc\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398717 kubelet[2639]: I1112 22:43:22.398712 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-xtables-lock\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398848 kubelet[2639]: I1112 22:43:22.398733 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-cilium-cgroup\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398848 kubelet[2639]: I1112 22:43:22.398754 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-host-proc-sys-net\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398848 kubelet[2639]: I1112 22:43:22.398777 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-cni-path\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398936 kubelet[2639]: I1112 22:43:22.398862 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-etc-cni-netd\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398936 kubelet[2639]: I1112 22:43:22.398905 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-host-proc-sys-kernel\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398991 kubelet[2639]: I1112 22:43:22.398941 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
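Editor's note: the slice created for the new pod, kubepods-burstable-pod6521e806_3775_4648_85cc_fc5ebcde1074.slice, embeds the QoS class and the pod UID with dashes rewritten to underscores (dashes separate slice levels in systemd names). A small sketch that reproduces the observed name; the helper name is mine, not kubelet's:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the systemd slice name seen above: QoS class prefix
// plus the pod UID with "-" replaced by "_" (dashes separate slice levels).
// This mirrors the observed naming, not kubelet's full cgroup-name logic.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "6521e806-3775-4648-85cc-fc5ebcde1074"))
	// kubepods-burstable-pod6521e806_3775_4648_85cc_fc5ebcde1074.slice
}
```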
\"kubernetes.io/projected/6521e806-3775-4648-85cc-fc5ebcde1074-hubble-tls\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.398991 kubelet[2639]: I1112 22:43:22.398966 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6521e806-3775-4648-85cc-fc5ebcde1074-cilium-config-path\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.399072 kubelet[2639]: I1112 22:43:22.399002 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-bpf-maps\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.399072 kubelet[2639]: I1112 22:43:22.399023 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6521e806-3775-4648-85cc-fc5ebcde1074-cilium-ipsec-secrets\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.399072 kubelet[2639]: I1112 22:43:22.399055 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qz9\" (UniqueName: \"kubernetes.io/projected/6521e806-3775-4648-85cc-fc5ebcde1074-kube-api-access-b4qz9\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.399237 kubelet[2639]: I1112 22:43:22.399091 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-cilium-run\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.399237 kubelet[2639]: I1112 22:43:22.399136 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6521e806-3775-4648-85cc-fc5ebcde1074-lib-modules\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.399237 kubelet[2639]: I1112 22:43:22.399161 2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6521e806-3775-4648-85cc-fc5ebcde1074-clustermesh-secrets\") pod \"cilium-mgtn9\" (UID: \"6521e806-3775-4648-85cc-fc5ebcde1074\") " pod="kube-system/cilium-mgtn9" Nov 12 22:43:22.442145 sshd[4528]: Connection closed by 10.0.0.1 port 35880 Nov 12 22:43:22.442522 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:22.450362 systemd[1]: sshd@31-10.0.0.30:22-10.0.0.1:35880.service: Deactivated successfully. Nov 12 22:43:22.452405 systemd[1]: session-32.scope: Deactivated successfully. Nov 12 22:43:22.454291 systemd-logind[1445]: Session 32 logged out. Waiting for processes to exit. Nov 12 22:43:22.464602 systemd[1]: Started sshd@32-10.0.0.30:22-10.0.0.1:35892.service - OpenSSH per-connection server daemon (10.0.0.1:35892). Nov 12 22:43:22.465472 systemd-logind[1445]: Removed session 32. 
Nov 12 22:43:22.502831 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 35892 ssh2: RSA SHA256:rSKLFAfGlie+bJ0W3A4NoitiLDuk9v8wzkt4NjKC2S8 Nov 12 22:43:22.504518 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:22.521308 systemd-logind[1445]: New session 33 of user core. Nov 12 22:43:22.527650 systemd[1]: Started session-33.scope - Session 33 of User core. Nov 12 22:43:22.646834 kubelet[2639]: E1112 22:43:22.646667 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:22.647758 containerd[1472]: time="2024-11-12T22:43:22.647288439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgtn9,Uid:6521e806-3775-4648-85cc-fc5ebcde1074,Namespace:kube-system,Attempt:0,}" Nov 12 22:43:22.670648 containerd[1472]: time="2024-11-12T22:43:22.670369465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:43:22.670648 containerd[1472]: time="2024-11-12T22:43:22.670511024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:43:22.670648 containerd[1472]: time="2024-11-12T22:43:22.670529018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:22.670900 containerd[1472]: time="2024-11-12T22:43:22.670862569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:22.693489 systemd[1]: Started cri-containerd-d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60.scope - libcontainer container d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60. Nov 12 22:43:22.720505 containerd[1472]: time="2024-11-12T22:43:22.720451506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgtn9,Uid:6521e806-3775-4648-85cc-fc5ebcde1074,Namespace:kube-system,Attempt:0,} returns sandbox id \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\"" Nov 12 22:43:22.721236 kubelet[2639]: E1112 22:43:22.721210 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:22.723296 containerd[1472]: time="2024-11-12T22:43:22.723247888Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:43:22.744030 containerd[1472]: time="2024-11-12T22:43:22.743971895Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651\"" Nov 12 22:43:22.745387 containerd[1472]: time="2024-11-12T22:43:22.744488452Z" level=info msg="StartContainer for \"05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651\"" Nov 12 22:43:22.777740 systemd[1]: Started cri-containerd-05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651.scope - libcontainer container 05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651. 
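Editor's note: the containerd lines from 22:43:22.647 onwards show the CRI ordering for the new pod: RunPodSandbox returns a sandbox ID, then each container is created within that sandbox and started. The sketch below illustrates the call order against a deliberately minimal, hypothetical client interface; the real CRI API (k8s.io/cri-api) has much richer request and response types:

```go
package main

import "fmt"

// runtimeClient is a hypothetical stand-in for a CRI runtime client,
// reduced to the three calls whose ordering is visible in the log.
type runtimeClient interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startFirstContainer mirrors the ordering above: the sandbox is created
// once, then the container is created within it and started.
func startFirstContainer(c runtimeClient, podName, containerName string) (string, error) {
	sandboxID, err := c.RunPodSandbox(podName)
	if err != nil {
		return "", fmt.Errorf("RunPodSandbox: %w", err)
	}
	id, err := c.CreateContainer(sandboxID, containerName)
	if err != nil {
		return "", fmt.Errorf("CreateContainer: %w", err)
	}
	return id, c.StartContainer(id)
}

// fakeClient is a trivial in-memory implementation so the sketch runs.
type fakeClient struct{ n int }

func (f *fakeClient) RunPodSandbox(pod string) (string, error) { return "sandbox-" + pod, nil }
func (f *fakeClient) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	return fmt.Sprintf("%s/ctr-%d-%s", sandboxID, f.n, name), nil
}
func (f *fakeClient) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	if _, err := startFirstContainer(&fakeClient{}, "cilium-mgtn9", "mount-cgroup"); err != nil {
		panic(err)
	}
}
```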
Nov 12 22:43:22.806311 containerd[1472]: time="2024-11-12T22:43:22.806250565Z" level=info msg="StartContainer for \"05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651\" returns successfully" Nov 12 22:43:22.817493 systemd[1]: cri-containerd-05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651.scope: Deactivated successfully. Nov 12 22:43:22.849939 containerd[1472]: time="2024-11-12T22:43:22.849853324Z" level=info msg="shim disconnected" id=05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651 namespace=k8s.io Nov 12 22:43:22.849939 containerd[1472]: time="2024-11-12T22:43:22.849909000Z" level=warning msg="cleaning up after shim disconnected" id=05add12da9518fb4e0f3e6dbec1345f231e16940ca6de0bbf0c00ae88209f651 namespace=k8s.io Nov 12 22:43:22.849939 containerd[1472]: time="2024-11-12T22:43:22.849917516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:23.367490 kubelet[2639]: E1112 22:43:23.367454 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:23.370071 containerd[1472]: time="2024-11-12T22:43:23.369997765Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:43:23.383453 containerd[1472]: time="2024-11-12T22:43:23.383380836Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55\"" Nov 12 22:43:23.383861 containerd[1472]: time="2024-11-12T22:43:23.383831289Z" level=info msg="StartContainer for \"2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55\"" Nov 12 22:43:23.414500 systemd[1]: Started cri-containerd-2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55.scope - libcontainer container 2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55. Nov 12 22:43:23.445123 containerd[1472]: time="2024-11-12T22:43:23.445051589Z" level=info msg="StartContainer for \"2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55\" returns successfully" Nov 12 22:43:23.450433 systemd[1]: cri-containerd-2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55.scope: Deactivated successfully. 
Nov 12 22:43:23.475296 containerd[1472]: time="2024-11-12T22:43:23.475224548Z" level=info msg="shim disconnected" id=2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55 namespace=k8s.io Nov 12 22:43:23.475296 containerd[1472]: time="2024-11-12T22:43:23.475280894Z" level=warning msg="cleaning up after shim disconnected" id=2e081e04cc7c1e24a7153299c162a8c3587808cdb669a7555c3d877fc9805f55 namespace=k8s.io Nov 12 22:43:23.475296 containerd[1472]: time="2024-11-12T22:43:23.475289120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:24.370812 kubelet[2639]: E1112 22:43:24.370767 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:24.372584 containerd[1472]: time="2024-11-12T22:43:24.372538931Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:43:24.394687 containerd[1472]: time="2024-11-12T22:43:24.394626625Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e\"" Nov 12 22:43:24.397361 containerd[1472]: time="2024-11-12T22:43:24.395297203Z" level=info msg="StartContainer for \"c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e\"" Nov 12 22:43:24.433523 systemd[1]: Started cri-containerd-c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e.scope - libcontainer container c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e. Nov 12 22:43:24.468864 containerd[1472]: time="2024-11-12T22:43:24.468814861Z" level=info msg="StartContainer for \"c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e\" returns successfully" Nov 12 22:43:24.470799 systemd[1]: cri-containerd-c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e.scope: Deactivated successfully. Nov 12 22:43:24.497299 containerd[1472]: time="2024-11-12T22:43:24.497225968Z" level=info msg="shim disconnected" id=c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e namespace=k8s.io Nov 12 22:43:24.497299 containerd[1472]: time="2024-11-12T22:43:24.497286843Z" level=warning msg="cleaning up after shim disconnected" id=c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e namespace=k8s.io Nov 12 22:43:24.497299 containerd[1472]: time="2024-11-12T22:43:24.497296261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:24.504960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a05c06cc26ce2246760f270425cb9e6175dd563eba73ce18a91c6fc0726e5e-rootfs.mount: Deactivated successfully. 
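Editor's note: across 22:43:22–22:43:25 the cilium init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) run strictly one after another: each StartContainer is followed by its scope deactivating and the shim disconnecting before the next CreateContainer appears. A toy sketch of that sequential loop, with hypothetical start/waitExited helpers rather than kubelet code:

```go
package main

import (
	"fmt"
	"time"
)

// runner is a hypothetical stand-in: start launches a container and
// waitExited blocks until it has terminated.
type runner interface {
	start(name string) error
	waitExited(name string, timeout time.Duration) error
}

// runInitContainers mirrors the sequence in the log: each init container
// must finish before the next one is even created.
func runInitContainers(r runner, names []string) error {
	for _, name := range names {
		if err := r.start(name); err != nil {
			return fmt.Errorf("start %s: %w", name, err)
		}
		if err := r.waitExited(name, 2*time.Minute); err != nil {
			return fmt.Errorf("wait for %s: %w", name, err)
		}
	}
	return nil
}

// fakeRunner just prints what would happen so the sketch is runnable.
type fakeRunner struct{}

func (fakeRunner) start(name string) error { fmt.Println("StartContainer", name); return nil }
func (fakeRunner) waitExited(name string, _ time.Duration) error {
	fmt.Println("scope deactivated for", name)
	return nil
}

func main() {
	names := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	if err := runInitContainers(fakeRunner{}, names); err != nil {
		panic(err)
	}
}
```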
Nov 12 22:43:24.732689 containerd[1472]: time="2024-11-12T22:43:24.732570784Z" level=info msg="StopPodSandbox for \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\"" Nov 12 22:43:24.736831 containerd[1472]: time="2024-11-12T22:43:24.732672776Z" level=info msg="TearDown network for sandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" successfully" Nov 12 22:43:24.736831 containerd[1472]: time="2024-11-12T22:43:24.736823719Z" level=info msg="StopPodSandbox for \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" returns successfully" Nov 12 22:43:24.737199 containerd[1472]: time="2024-11-12T22:43:24.737168972Z" level=info msg="RemovePodSandbox for \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\"" Nov 12 22:43:24.737199 containerd[1472]: time="2024-11-12T22:43:24.737198968Z" level=info msg="Forcibly stopping sandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\"" Nov 12 22:43:24.737283 containerd[1472]: time="2024-11-12T22:43:24.737251578Z" level=info msg="TearDown network for sandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" successfully" Nov 12 22:43:24.740678 containerd[1472]: time="2024-11-12T22:43:24.740649256Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:43:24.740761 containerd[1472]: time="2024-11-12T22:43:24.740683190Z" level=info msg="RemovePodSandbox \"12c1c3d5e2426d1546df9ed7a3af0430031d9a063a4550ad6ec2aac7cf8d30e4\" returns successfully" Nov 12 22:43:24.740964 containerd[1472]: time="2024-11-12T22:43:24.740932201Z" level=info msg="StopPodSandbox for \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\"" Nov 12 22:43:24.741035 containerd[1472]: time="2024-11-12T22:43:24.741010409Z" level=info msg="TearDown network for sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" successfully" Nov 12 22:43:24.741073 containerd[1472]: time="2024-11-12T22:43:24.741036238Z" level=info msg="StopPodSandbox for \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" returns successfully" Nov 12 22:43:24.741279 containerd[1472]: time="2024-11-12T22:43:24.741247718Z" level=info msg="RemovePodSandbox for \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\"" Nov 12 22:43:24.741279 containerd[1472]: time="2024-11-12T22:43:24.741273055Z" level=info msg="Forcibly stopping sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\"" Nov 12 22:43:24.741408 containerd[1472]: time="2024-11-12T22:43:24.741334291Z" level=info msg="TearDown network for sandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" successfully" Nov 12 22:43:24.744618 containerd[1472]: time="2024-11-12T22:43:24.744579881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 22:43:24.744618 containerd[1472]: time="2024-11-12T22:43:24.744616331Z" level=info msg="RemovePodSandbox \"b8121ff2a04541b3e05a208cd0db74a51b23af191dadb363ac5534c8d58089a1\" returns successfully" Nov 12 22:43:24.810739 kubelet[2639]: E1112 22:43:24.810709 2639 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:43:25.374109 kubelet[2639]: E1112 22:43:25.374074 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:25.376477 containerd[1472]: time="2024-11-12T22:43:25.376410533Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:43:25.453103 containerd[1472]: time="2024-11-12T22:43:25.453050407Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e\"" Nov 12 22:43:25.454633 containerd[1472]: time="2024-11-12T22:43:25.453690798Z" level=info msg="StartContainer for \"6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e\"" Nov 12 22:43:25.488487 systemd[1]: Started cri-containerd-6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e.scope - libcontainer container 6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e. Nov 12 22:43:25.513508 systemd[1]: cri-containerd-6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e.scope: Deactivated successfully. Nov 12 22:43:25.515662 containerd[1472]: time="2024-11-12T22:43:25.515620258Z" level=info msg="StartContainer for \"6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e\" returns successfully" Nov 12 22:43:25.536265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e-rootfs.mount: Deactivated successfully. 
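Editor's note: the "cni plugin not initialized" condition persists until a CNI configuration exists for the runtime to load; cilium-agent writes its conflist once it is running, which is why the node is marked NotReady at 22:43:27 and recovers after lxc_health comes up at 22:43:30 further down. A rough illustration of the kind of check involved, assuming the conventional /etc/cni/net.d directory; this is not containerd's actual code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cniConfigured reports whether any CNI config file exists in dir, which is
// roughly the condition behind "cni plugin not initialized": until the agent
// drops its conflist, there is nothing for the runtime to load.
// Illustration only; a real check also parses and validates the config.
func cniConfigured(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/cni/net.d")
	fmt.Println(ok, err)
}
```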
Nov 12 22:43:25.543115 containerd[1472]: time="2024-11-12T22:43:25.543059068Z" level=info msg="shim disconnected" id=6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e namespace=k8s.io Nov 12 22:43:25.543115 containerd[1472]: time="2024-11-12T22:43:25.543110855Z" level=warning msg="cleaning up after shim disconnected" id=6e3783c7b98da2d471224ddecfd2b9f21d04cc97f9493febcdde276b48a7a16e namespace=k8s.io Nov 12 22:43:25.543426 containerd[1472]: time="2024-11-12T22:43:25.543119572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:26.378627 kubelet[2639]: E1112 22:43:26.378577 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:26.381548 containerd[1472]: time="2024-11-12T22:43:26.381486101Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:43:26.401804 containerd[1472]: time="2024-11-12T22:43:26.401741980Z" level=info msg="CreateContainer within sandbox \"d09c151a65c904518cdb9751d00c15c9a1c9858e7c8acc472c6cc16f93bb6d60\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f760cb2c3ef5806fc264c32784fcd76dec01e18d8a50f8c3c2a5dcbdede55e3a\"" Nov 12 22:43:26.402496 containerd[1472]: time="2024-11-12T22:43:26.402423118Z" level=info msg="StartContainer for \"f760cb2c3ef5806fc264c32784fcd76dec01e18d8a50f8c3c2a5dcbdede55e3a\"" Nov 12 22:43:26.439498 systemd[1]: Started cri-containerd-f760cb2c3ef5806fc264c32784fcd76dec01e18d8a50f8c3c2a5dcbdede55e3a.scope - libcontainer container f760cb2c3ef5806fc264c32784fcd76dec01e18d8a50f8c3c2a5dcbdede55e3a. 
Nov 12 22:43:26.493886 containerd[1472]: time="2024-11-12T22:43:26.493811476Z" level=info msg="StartContainer for \"f760cb2c3ef5806fc264c32784fcd76dec01e18d8a50f8c3c2a5dcbdede55e3a\" returns successfully" Nov 12 22:43:27.011513 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 12 22:43:27.384258 kubelet[2639]: E1112 22:43:27.384224 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:27.396561 kubelet[2639]: I1112 22:43:27.396498 2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mgtn9" podStartSLOduration=5.396449434 podStartE2EDuration="5.396449434s" podCreationTimestamp="2024-11-12 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:43:27.395994724 +0000 UTC m=+123.064339110" watchObservedRunningTime="2024-11-12 22:43:27.396449434 +0000 UTC m=+123.064793820" Nov 12 22:43:27.431740 kubelet[2639]: I1112 22:43:27.431690 2639 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T22:43:27Z","lastTransitionTime":"2024-11-12T22:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 12 22:43:28.648711 kubelet[2639]: E1112 22:43:28.648663 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:30.240528 systemd-networkd[1403]: lxc_health: Link UP Nov 12 22:43:30.244730 systemd-networkd[1403]: lxc_health: Gained carrier Nov 12 22:43:30.649523 kubelet[2639]: E1112 22:43:30.649460 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:31.391525 kubelet[2639]: E1112 22:43:31.391478 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:31.935619 systemd-networkd[1403]: lxc_health: Gained IPv6LL Nov 12 22:43:32.392974 kubelet[2639]: E1112 22:43:32.392927 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:36.719206 kubelet[2639]: E1112 22:43:36.719142 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:37.343522 sshd[4541]: Connection closed by 10.0.0.1 port 35892 Nov 12 22:43:37.344014 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:37.350022 systemd[1]: sshd@32-10.0.0.30:22-10.0.0.1:35892.service: Deactivated successfully. Nov 12 22:43:37.352600 systemd[1]: session-33.scope: Deactivated successfully. Nov 12 22:43:37.353621 systemd-logind[1445]: Session 33 logged out. Waiting for processes to exit. Nov 12 22:43:37.354802 systemd-logind[1445]: Removed session 33. 
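Editor's note: the startup-latency line above reports podStartSLOduration=5.396449434s, which is simply the watch-observed running time minus the pod creation timestamp; both image-pull timestamps are the zero value, so nothing is subtracted for pulls. A quick check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces podStartSLOduration from the tracker line above: the pod was
// created at 22:43:22 and observed running on the watch at 22:43:27.396449434,
// so the reported SLO duration is the plain difference.
func main() {
	created := time.Date(2024, time.November, 12, 22, 43, 22, 0, time.UTC)
	observed := time.Date(2024, time.November, 12, 22, 43, 27, 396449434, time.UTC)
	fmt.Println(observed.Sub(created)) // 5.396449434s
}
```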
Nov 12 22:43:37.718801 kubelet[2639]: E1112 22:43:37.718655 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
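Editor's note: the recurring dns.go warning means the host's resolv.conf listed more nameservers than kubelet will pass through (three, the classic resolv.conf limit), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 were applied. A small sketch of that clamping; it is illustrative, not kubelet's dns.go, and the dropped fourth server in the example (9.9.9.9) is invented since the log does not say what was omitted:

```go
package main

import "fmt"

const maxNameservers = 3 // classic resolv.conf limit that kubelet also enforces

// clampNameservers keeps the first maxNameservers entries and reports whether
// anything was dropped, the case in which kubelet logs "Nameserver limits exceeded".
func clampNameservers(ns []string) (applied []string, exceeded bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// The fourth entry is hypothetical; the log only shows the applied three.
	applied, exceeded := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println(applied, exceeded) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}
```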