Dec 13 01:56:28.849128 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 01:56:28.849145 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:56:28.849155 kernel: BIOS-provided physical RAM map: Dec 13 01:56:28.849160 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:56:28.849166 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:56:28.849171 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:56:28.849178 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:56:28.849184 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:56:28.849189 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:56:28.849196 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:56:28.849201 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 01:56:28.849206 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 01:56:28.849212 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:56:28.849217 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:56:28.849224 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:56:28.849231 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:56:28.849237 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 01:56:28.849243 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:56:28.849249 kernel: NX (Execute Disable) protection: active Dec 13 01:56:28.849255 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 01:56:28.849261 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 01:56:28.849266 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 01:56:28.849272 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 01:56:28.849278 kernel: extended physical RAM map: Dec 13 01:56:28.849283 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:56:28.849291 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:56:28.849297 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:56:28.849304 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:56:28.849312 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:56:28.849320 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:56:28.849327 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:56:28.849335 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Dec 13 01:56:28.849343 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Dec 13 01:56:28.849350 kernel: reserve setup_data: [mem 
0x000000009b474e58-0x000000009b475017] usable Dec 13 01:56:28.849357 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Dec 13 01:56:28.849363 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Dec 13 01:56:28.849370 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 01:56:28.849376 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:56:28.849382 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:56:28.849388 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:56:28.849396 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:56:28.849403 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 01:56:28.849409 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:56:28.849416 kernel: efi: EFI v2.70 by EDK II Dec 13 01:56:28.849422 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Dec 13 01:56:28.849429 kernel: random: crng init done Dec 13 01:56:28.849435 kernel: SMBIOS 2.8 present. Dec 13 01:56:28.849460 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 01:56:28.849467 kernel: Hypervisor detected: KVM Dec 13 01:56:28.849489 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:56:28.849495 kernel: kvm-clock: cpu 0, msr 1c19b001, primary cpu clock Dec 13 01:56:28.849502 kernel: kvm-clock: using sched offset of 4065442498 cycles Dec 13 01:56:28.849510 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:56:28.849517 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:56:28.849524 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:56:28.849530 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:56:28.849537 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 01:56:28.849543 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:56:28.849550 kernel: Using GB pages for direct mapping Dec 13 01:56:28.849556 kernel: Secure boot disabled Dec 13 01:56:28.849562 kernel: ACPI: Early table checksum verification disabled Dec 13 01:56:28.849570 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 01:56:28.849576 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:56:28.849583 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:56:28.849589 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:56:28.849596 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 01:56:28.849602 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:56:28.849609 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:56:28.849615 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:56:28.849622 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:56:28.849629 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 01:56:28.849636 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 01:56:28.849642 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cb7a000-0x9cb7c1a7] Dec 13 01:56:28.849649 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 01:56:28.849655 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 01:56:28.849662 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 01:56:28.849670 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 01:56:28.849677 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 01:56:28.849685 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 01:56:28.849693 kernel: No NUMA configuration found Dec 13 01:56:28.849699 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 01:56:28.849706 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 01:56:28.849712 kernel: Zone ranges: Dec 13 01:56:28.849719 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:56:28.849725 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 01:56:28.849731 kernel: Normal empty Dec 13 01:56:28.849738 kernel: Movable zone start for each node Dec 13 01:56:28.849744 kernel: Early memory node ranges Dec 13 01:56:28.849751 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:56:28.849758 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 01:56:28.849764 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 01:56:28.849770 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 01:56:28.849785 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 01:56:28.849791 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 01:56:28.849798 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 01:56:28.849804 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:56:28.849811 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:56:28.849817 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 01:56:28.849825 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:56:28.849831 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 01:56:28.849838 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 01:56:28.849844 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 01:56:28.849851 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:56:28.849857 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:56:28.849864 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:56:28.849870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:56:28.849877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:56:28.849884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:56:28.849891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:56:28.849897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:56:28.849903 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:56:28.849910 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:56:28.849916 kernel: TSC deadline timer available Dec 13 01:56:28.849923 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:56:28.849929 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:56:28.849935 kernel: kvm-guest: setup PV sched yield Dec 13 01:56:28.849943 kernel: [mem 
0xc0000000-0xffffffff] available for PCI devices Dec 13 01:56:28.849950 kernel: Booting paravirtualized kernel on KVM Dec 13 01:56:28.849961 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:56:28.849969 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:56:28.849975 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Dec 13 01:56:28.849982 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 01:56:28.849989 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:56:28.849995 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 01:56:28.850002 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Dec 13 01:56:28.850009 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:56:28.850015 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:56:28.850022 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Dec 13 01:56:28.850030 kernel: Policy zone: DMA32 Dec 13 01:56:28.850038 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:56:28.850045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:56:28.850052 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:56:28.850060 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:56:28.850067 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:56:28.850074 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 169308K reserved, 0K cma-reserved) Dec 13 01:56:28.850081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:56:28.850088 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 01:56:28.850095 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 01:56:28.850102 kernel: rcu: Hierarchical RCU implementation. Dec 13 01:56:28.850109 kernel: rcu: RCU event tracing is enabled. Dec 13 01:56:28.850116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:56:28.850124 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:56:28.850131 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:56:28.850138 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:56:28.850145 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:56:28.850151 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:56:28.850158 kernel: Console: colour dummy device 80x25 Dec 13 01:56:28.850165 kernel: printk: console [ttyS0] enabled Dec 13 01:56:28.850171 kernel: ACPI: Core revision 20210730 Dec 13 01:56:28.850178 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:56:28.850187 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:56:28.850194 kernel: x2apic enabled Dec 13 01:56:28.850201 kernel: Switched APIC routing to physical x2apic. 
Dec 13 01:56:28.850207 kernel: kvm-guest: setup PV IPIs Dec 13 01:56:28.850214 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:56:28.850221 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:56:28.850228 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 01:56:28.850235 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:56:28.850242 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:56:28.850250 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:56:28.850266 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:56:28.850273 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:56:28.850279 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:56:28.850286 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:56:28.850300 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:56:28.850313 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:56:28.850325 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:56:28.850348 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 01:56:28.850373 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:56:28.850383 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:56:28.850392 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:56:28.850399 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:56:28.850406 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:56:28.850412 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:56:28.850419 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:56:28.850426 kernel: LSM: Security Framework initializing Dec 13 01:56:28.850433 kernel: SELinux: Initializing. Dec 13 01:56:28.850459 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:56:28.850469 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:56:28.850476 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:56:28.850483 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:56:28.850490 kernel: ... version: 0 Dec 13 01:56:28.850497 kernel: ... bit width: 48 Dec 13 01:56:28.850504 kernel: ... generic registers: 6 Dec 13 01:56:28.850510 kernel: ... value mask: 0000ffffffffffff Dec 13 01:56:28.850517 kernel: ... max period: 00007fffffffffff Dec 13 01:56:28.850525 kernel: ... fixed-purpose events: 0 Dec 13 01:56:28.850532 kernel: ... event mask: 000000000000003f Dec 13 01:56:28.850539 kernel: signal: max sigframe size: 1776 Dec 13 01:56:28.850554 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:56:28.850561 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:56:28.850568 kernel: x86: Booting SMP configuration: Dec 13 01:56:28.850574 kernel: .... 
node #0, CPUs: #1 Dec 13 01:56:28.850581 kernel: kvm-clock: cpu 1, msr 1c19b041, secondary cpu clock Dec 13 01:56:28.850588 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 01:56:28.850596 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Dec 13 01:56:28.850604 kernel: #2 Dec 13 01:56:28.850613 kernel: kvm-clock: cpu 2, msr 1c19b081, secondary cpu clock Dec 13 01:56:28.850622 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 01:56:28.850631 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Dec 13 01:56:28.850640 kernel: #3 Dec 13 01:56:28.850649 kernel: kvm-clock: cpu 3, msr 1c19b0c1, secondary cpu clock Dec 13 01:56:28.850658 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 01:56:28.850667 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Dec 13 01:56:28.850676 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:56:28.850684 kernel: smpboot: Max logical packages: 1 Dec 13 01:56:28.850691 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:56:28.850698 kernel: devtmpfs: initialized Dec 13 01:56:28.850705 kernel: x86/mm: Memory block size: 128MB Dec 13 01:56:28.850712 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 01:56:28.850719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 01:56:28.850727 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 01:56:28.850736 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 01:56:28.850745 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 01:56:28.850754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:56:28.850761 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:56:28.850768 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:56:28.850782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:56:28.850789 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:56:28.850798 kernel: audit: type=2000 audit(1734054989.065:1): state=initialized audit_enabled=0 res=1 Dec 13 01:56:28.850807 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:56:28.850816 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:56:28.850824 kernel: cpuidle: using governor menu Dec 13 01:56:28.850831 kernel: ACPI: bus type PCI registered Dec 13 01:56:28.850838 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:56:28.850844 kernel: dca service started, version 1.12.1 Dec 13 01:56:28.850851 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:56:28.850858 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 01:56:28.850865 kernel: PCI: Using configuration type 1 for base access Dec 13 01:56:28.850872 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:56:28.850882 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:56:28.850893 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:56:28.850902 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:56:28.850910 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:56:28.850916 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:56:28.850923 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:56:28.850930 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 01:56:28.850936 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 01:56:28.850943 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 01:56:28.850950 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:56:28.850958 kernel: ACPI: Interpreter enabled Dec 13 01:56:28.850965 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:56:28.850972 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:56:28.850978 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:56:28.850985 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:56:28.850992 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:56:28.851113 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:56:28.851213 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:56:28.851291 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:56:28.851301 kernel: PCI host bridge to bus 0000:00 Dec 13 01:56:28.851375 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:56:28.851438 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:56:28.851516 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:56:28.851576 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:56:28.851637 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:56:28.851700 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 01:56:28.851761 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:56:28.851852 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:56:28.851931 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:56:28.852001 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 01:56:28.852070 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 01:56:28.852144 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 01:56:28.852214 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 01:56:28.852283 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:56:28.852360 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:56:28.852433 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 01:56:28.852516 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 01:56:28.852584 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 01:56:28.852668 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:56:28.852739 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 01:56:28.852817 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 01:56:28.852886 kernel: pci 0000:00:03.0: reg 0x20: 
[mem 0x800004000-0x800007fff 64bit pref] Dec 13 01:56:28.852963 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:56:28.853033 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 01:56:28.853102 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 01:56:28.853173 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 01:56:28.853243 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 01:56:28.853317 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:56:28.853386 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:56:28.853487 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:56:28.853562 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 01:56:28.853733 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 01:56:28.853824 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:56:28.853894 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 01:56:28.853904 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:56:28.853911 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:56:28.853918 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:56:28.853925 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:56:28.853931 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:56:28.853938 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:56:28.853947 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:56:28.853954 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:56:28.853961 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:56:28.853968 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:56:28.853975 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:56:28.853981 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:56:28.853988 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:56:28.853995 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:56:28.854002 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:56:28.854010 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:56:28.854016 kernel: iommu: Default domain type: Translated Dec 13 01:56:28.854023 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:56:28.854094 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:56:28.854162 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:56:28.854230 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:56:28.854240 kernel: vgaarb: loaded Dec 13 01:56:28.854247 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:56:28.854257 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:56:28.854263 kernel: PTP clock support registered Dec 13 01:56:28.854270 kernel: Registered efivars operations Dec 13 01:56:28.854277 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:56:28.854284 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:56:28.854290 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 01:56:28.854297 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 01:56:28.854304 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Dec 13 01:56:28.854310 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Dec 13 01:56:28.854317 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 01:56:28.854325 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 01:56:28.854332 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:56:28.854339 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:56:28.854345 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:56:28.854352 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:56:28.854359 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:56:28.854366 kernel: pnp: PnP ACPI init Dec 13 01:56:28.854438 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:56:28.854485 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:56:28.854492 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:56:28.854499 kernel: NET: Registered PF_INET protocol family Dec 13 01:56:28.854506 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:56:28.854513 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:56:28.854520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:56:28.854526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:56:28.854533 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 01:56:28.854542 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:56:28.854549 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:56:28.854556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:56:28.854563 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:56:28.854570 kernel: NET: Registered PF_XDP protocol family Dec 13 01:56:28.854643 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 01:56:28.854713 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 01:56:28.854782 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:56:28.854847 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:56:28.854909 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:56:28.854967 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:56:28.855028 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:56:28.855088 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 01:56:28.855098 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:56:28.855105 kernel: Initialise system trusted keyrings Dec 13 01:56:28.855112 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:56:28.855119 
kernel: Key type asymmetric registered Dec 13 01:56:28.855127 kernel: Asymmetric key parser 'x509' registered Dec 13 01:56:28.855134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 01:56:28.855149 kernel: io scheduler mq-deadline registered Dec 13 01:56:28.855157 kernel: io scheduler kyber registered Dec 13 01:56:28.855165 kernel: io scheduler bfq registered Dec 13 01:56:28.855172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:56:28.855179 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:56:28.855187 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:56:28.855194 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:56:28.855202 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:56:28.855209 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:56:28.855216 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:56:28.855223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:56:28.855231 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:56:28.855302 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:56:28.855312 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:56:28.855379 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:56:28.855457 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:56:28 UTC (1734054988) Dec 13 01:56:28.855523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:56:28.855532 kernel: efifb: probing for efifb Dec 13 01:56:28.855540 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 01:56:28.855547 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 01:56:28.855554 kernel: efifb: scrolling: redraw Dec 13 01:56:28.855563 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:56:28.855570 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 01:56:28.855578 kernel: fb0: EFI VGA frame buffer device Dec 13 01:56:28.855595 kernel: pstore: Registered efi as persistent store backend Dec 13 01:56:28.855603 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:56:28.855610 kernel: Segment Routing with IPv6 Dec 13 01:56:28.855619 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:56:28.855633 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:56:28.855650 kernel: Key type dns_resolver registered Dec 13 01:56:28.855660 kernel: IPI shorthand broadcast: enabled Dec 13 01:56:28.855667 kernel: sched_clock: Marking stable (445099699, 127522369)->(617329714, -44707646) Dec 13 01:56:28.855674 kernel: registered taskstats version 1 Dec 13 01:56:28.855684 kernel: Loading compiled-in X.509 certificates Dec 13 01:56:28.855692 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 01:56:28.855699 kernel: Key type .fscrypt registered Dec 13 01:56:28.855706 kernel: Key type fscrypt-provisioning registered Dec 13 01:56:28.855713 kernel: pstore: Using crash dump compression: deflate Dec 13 01:56:28.855722 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:56:28.855729 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:56:28.855736 kernel: ima: No architecture policies found Dec 13 01:56:28.855743 kernel: clk: Disabling unused clocks Dec 13 01:56:28.855750 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 01:56:28.855757 kernel: Write protecting the kernel read-only data: 28672k Dec 13 01:56:28.855765 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 01:56:28.855780 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 01:56:28.855787 kernel: Run /init as init process Dec 13 01:56:28.855795 kernel: with arguments: Dec 13 01:56:28.855802 kernel: /init Dec 13 01:56:28.855809 kernel: with environment: Dec 13 01:56:28.855816 kernel: HOME=/ Dec 13 01:56:28.855823 kernel: TERM=linux Dec 13 01:56:28.855830 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:56:28.855839 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:56:28.855848 systemd[1]: Detected virtualization kvm. Dec 13 01:56:28.855857 systemd[1]: Detected architecture x86-64. Dec 13 01:56:28.855868 systemd[1]: Running in initrd. Dec 13 01:56:28.855876 systemd[1]: No hostname configured, using default hostname. Dec 13 01:56:28.855883 systemd[1]: Hostname set to <localhost>. Dec 13 01:56:28.855891 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:56:28.855898 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:56:28.855906 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:56:28.855913 systemd[1]: Reached target cryptsetup.target. Dec 13 01:56:28.855922 systemd[1]: Reached target paths.target. Dec 13 01:56:28.855930 systemd[1]: Reached target slices.target. Dec 13 01:56:28.855937 systemd[1]: Reached target swap.target. Dec 13 01:56:28.855945 systemd[1]: Reached target timers.target. Dec 13 01:56:28.855953 systemd[1]: Listening on iscsid.socket. Dec 13 01:56:28.855960 systemd[1]: Listening on iscsiuio.socket. Dec 13 01:56:28.855968 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:56:28.855975 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:56:28.855984 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:56:28.855992 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:56:28.855999 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:56:28.856007 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:56:28.856014 systemd[1]: Reached target sockets.target. Dec 13 01:56:28.856022 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:56:28.856029 systemd[1]: Finished network-cleanup.service. Dec 13 01:56:28.856037 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:56:28.856044 systemd[1]: Starting systemd-journald.service... Dec 13 01:56:28.856053 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:56:28.856060 systemd[1]: Starting systemd-resolved.service... Dec 13 01:56:28.856068 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 01:56:28.856075 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:56:28.856083 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:56:28.856090 kernel: audit: type=1130 audit(1734054988.850:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.856098 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 01:56:28.856106 kernel: audit: type=1130 audit(1734054988.855:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.856119 systemd-journald[198]: Journal started Dec 13 01:56:28.856157 systemd-journald[198]: Runtime Journal (/run/log/journal/18816da96b844a3aa09b5e2a89abbd54) is 6.0M, max 48.4M, 42.4M free. Dec 13 01:56:28.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.853450 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 01:56:28.861547 systemd[1]: Started systemd-journald.service. Dec 13 01:56:28.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.862317 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 01:56:28.866862 kernel: audit: type=1130 audit(1734054988.861:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.867328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:56:28.874878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:56:28.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.879463 kernel: audit: type=1130 audit(1734054988.874:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.881078 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 01:56:28.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.886497 kernel: audit: type=1130 audit(1734054988.881:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.882756 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:56:28.891058 systemd-resolved[200]: Positive Trust Anchors: Dec 13 01:56:28.892148 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:56:28.891316 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:56:28.891346 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:56:28.895071 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 01:56:28.905763 kernel: Bridge firewalling registered Dec 13 01:56:28.905794 kernel: audit: type=1130 audit(1734054988.900:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.905857 dracut-cmdline[216]: dracut-dracut-053 Dec 13 01:56:28.895786 systemd[1]: Started systemd-resolved.service. Dec 13 01:56:28.900981 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 01:56:28.901053 systemd[1]: Reached target nss-lookup.target. Dec 13 01:56:28.910152 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:56:28.922468 kernel: SCSI subsystem initialized Dec 13 01:56:28.933648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:56:28.933670 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:56:28.934917 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:56:28.937607 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 01:56:28.938324 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:56:28.943184 kernel: audit: type=1130 audit(1734054988.938:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.939332 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:56:28.948333 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:56:28.952937 kernel: audit: type=1130 audit(1734054988.949:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:28.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:28.977466 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:56:28.993474 kernel: iscsi: registered transport (tcp) Dec 13 01:56:29.014462 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:56:29.014491 kernel: QLogic iSCSI HBA Driver Dec 13 01:56:29.044739 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:56:29.050273 kernel: audit: type=1130 audit(1734054989.045:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:29.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:29.046473 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:56:29.091481 kernel: raid6: avx2x4 gen() 30906 MB/s Dec 13 01:56:29.108472 kernel: raid6: avx2x4 xor() 7792 MB/s Dec 13 01:56:29.125470 kernel: raid6: avx2x2 gen() 32219 MB/s Dec 13 01:56:29.142467 kernel: raid6: avx2x2 xor() 18719 MB/s Dec 13 01:56:29.159470 kernel: raid6: avx2x1 gen() 25899 MB/s Dec 13 01:56:29.176494 kernel: raid6: avx2x1 xor() 15235 MB/s Dec 13 01:56:29.193485 kernel: raid6: sse2x4 gen() 14332 MB/s Dec 13 01:56:29.210615 kernel: raid6: sse2x4 xor() 7239 MB/s Dec 13 01:56:29.227489 kernel: raid6: sse2x2 gen() 15558 MB/s Dec 13 01:56:29.244557 kernel: raid6: sse2x2 xor() 7557 MB/s Dec 13 01:56:29.261505 kernel: raid6: sse2x1 gen() 12118 MB/s Dec 13 01:56:29.278925 kernel: raid6: sse2x1 xor() 7543 MB/s Dec 13 01:56:29.278994 kernel: raid6: using algorithm avx2x2 gen() 32219 MB/s Dec 13 01:56:29.279003 kernel: raid6: .... xor() 18719 MB/s, rmw enabled Dec 13 01:56:29.279637 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:56:29.292482 kernel: xor: automatically using best checksumming function avx Dec 13 01:56:29.385490 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:56:29.393291 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:56:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:29.395000 audit: BPF prog-id=7 op=LOAD Dec 13 01:56:29.395000 audit: BPF prog-id=8 op=LOAD Dec 13 01:56:29.396280 systemd[1]: Starting systemd-udevd.service... Dec 13 01:56:29.408790 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 01:56:29.412424 systemd[1]: Started systemd-udevd.service. Dec 13 01:56:29.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:29.415561 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:56:29.426898 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Dec 13 01:56:29.452940 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:56:29.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:29.455350 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:56:29.485959 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 01:56:29.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:29.515211 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:56:29.521108 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:56:29.521121 kernel: GPT:9289727 != 19775487 Dec 13 01:56:29.521129 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:56:29.521138 kernel: GPT:9289727 != 19775487 Dec 13 01:56:29.521146 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:56:29.521154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:29.523466 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:56:29.535914 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:56:29.535939 kernel: AES CTR mode by8 optimization enabled Dec 13 01:56:29.547467 kernel: libata version 3.00 loaded. Dec 13 01:56:29.551959 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Dec 13 01:56:29.551875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:56:29.555474 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:56:29.557538 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:56:29.570305 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:56:29.573175 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:56:29.593571 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:56:29.593588 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:56:29.593829 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:56:29.593976 kernel: scsi host0: ahci Dec 13 01:56:29.594072 kernel: scsi host1: ahci Dec 13 01:56:29.594155 kernel: scsi host2: ahci Dec 13 01:56:29.594232 kernel: scsi host3: ahci Dec 13 01:56:29.594310 kernel: scsi host4: ahci Dec 13 01:56:29.594387 kernel: scsi host5: ahci Dec 13 01:56:29.594487 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:56:29.594498 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:56:29.594507 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:56:29.594516 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:56:29.594524 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:56:29.594533 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:56:29.594542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:29.581764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:56:29.584316 systemd[1]: Starting disk-uuid.service... Dec 13 01:56:29.597403 disk-uuid[538]: Primary Header is updated. Dec 13 01:56:29.597403 disk-uuid[538]: Secondary Entries is updated. Dec 13 01:56:29.597403 disk-uuid[538]: Secondary Header is updated. Dec 13 01:56:29.602875 kernel: GPT:disk_guids don't match. Dec 13 01:56:29.602891 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 01:56:29.602900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:29.602910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:29.898762 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:56:29.898821 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:29.899476 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:29.900465 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:29.901474 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:56:29.902752 kernel: ata3.00: applying bridge limits Dec 13 01:56:29.903465 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:29.904465 kernel: ata3.00: configured for UDMA/100 Dec 13 01:56:29.910463 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:29.910488 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:56:29.940499 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:56:29.958091 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:56:29.958109 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:56:30.614965 disk-uuid[539]: The operation has completed successfully. Dec 13 01:56:30.616414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:30.640612 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:56:30.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.640689 systemd[1]: Finished disk-uuid.service. Dec 13 01:56:30.642426 systemd[1]: Starting verity-setup.service... Dec 13 01:56:30.659466 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:56:30.678540 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:56:30.679958 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:56:30.682800 systemd[1]: Finished verity-setup.service. Dec 13 01:56:30.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.739469 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:56:30.739653 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:56:30.740578 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:56:30.741312 systemd[1]: Starting ignition-setup.service... Dec 13 01:56:30.744376 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:56:30.762029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:56:30.762054 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:56:30.762067 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:56:30.771034 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:56:30.799950 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:56:30.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:30.804000 audit: BPF prog-id=9 op=LOAD Dec 13 01:56:30.805115 systemd[1]: Starting systemd-networkd.service... Dec 13 01:56:30.823764 systemd-networkd[718]: lo: Link UP Dec 13 01:56:30.823772 systemd-networkd[718]: lo: Gained carrier Dec 13 01:56:30.824137 systemd-networkd[718]: Enumeration completed Dec 13 01:56:30.824316 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:56:30.824817 systemd[1]: Started systemd-networkd.service. Dec 13 01:56:30.825833 systemd-networkd[718]: eth0: Link UP Dec 13 01:56:30.825841 systemd-networkd[718]: eth0: Gained carrier Dec 13 01:56:30.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.830069 systemd[1]: Reached target network.target. Dec 13 01:56:30.832226 systemd[1]: Starting iscsiuio.service... Dec 13 01:56:30.835659 systemd[1]: Started iscsiuio.service. Dec 13 01:56:30.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.837939 systemd[1]: Starting iscsid.service... Dec 13 01:56:30.840579 iscsid[723]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:56:30.840579 iscsid[723]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 01:56:30.840579 iscsid[723]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:56:30.840579 iscsid[723]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:56:30.840579 iscsid[723]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:56:30.840579 iscsid[723]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:56:30.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.842021 systemd[1]: Started iscsid.service. Dec 13 01:56:30.843540 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:56:30.854914 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:56:30.861573 systemd[1]: Finished ignition-setup.service. Dec 13 01:56:30.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.864155 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 01:56:30.868655 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:56:30.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.871757 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 01:56:30.874561 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:56:30.877091 systemd[1]: Reached target remote-fs.target. Dec 13 01:56:30.879918 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:56:30.887731 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:56:30.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.905877 ignition[733]: Ignition 2.14.0 Dec 13 01:56:30.905895 ignition[733]: Stage: fetch-offline Dec 13 01:56:30.905954 ignition[733]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:30.905966 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:30.906101 ignition[733]: parsed url from cmdline: "" Dec 13 01:56:30.906105 ignition[733]: no config URL provided Dec 13 01:56:30.906111 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:56:30.906120 ignition[733]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:56:30.906143 ignition[733]: op(1): [started] loading QEMU firmware config module Dec 13 01:56:30.906149 ignition[733]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:56:30.910733 ignition[733]: op(1): [finished] loading QEMU firmware config module Dec 13 01:56:30.955588 ignition[733]: parsing config with SHA512: fed785da4eeffed82695a9f257a4be61e80a4c016ce5326ff12fa2d7e31cec3e523b0beb061ed98e0d75433f6e5390f48c4185a559fa10140bc2f9144206501e Dec 13 01:56:30.962621 unknown[733]: fetched base config from "system" Dec 13 01:56:30.962906 unknown[733]: fetched user config from "qemu" Dec 13 01:56:30.963504 ignition[733]: fetch-offline: fetch-offline passed Dec 13 01:56:30.963564 ignition[733]: Ignition finished successfully Dec 13 01:56:30.966314 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:56:30.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.968135 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:56:30.970403 systemd[1]: Starting ignition-kargs.service... Dec 13 01:56:30.979829 ignition[746]: Ignition 2.14.0 Dec 13 01:56:30.979838 ignition[746]: Stage: kargs Dec 13 01:56:30.979915 ignition[746]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:30.979925 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:30.980708 ignition[746]: kargs: kargs passed Dec 13 01:56:30.980750 ignition[746]: Ignition finished successfully Dec 13 01:56:30.985105 systemd[1]: Finished ignition-kargs.service. Dec 13 01:56:30.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.987220 systemd[1]: Starting ignition-disks.service... 
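Ignition logged no config URL on the kernel command line and instead loaded the qemu_fw_cfg module, i.e. it reads the user config from QEMU's firmware config device (the config is later reported as fetched from "qemu"). A minimal sketch of how such a config is handed to the guest, assuming a local config.ign file and an otherwise complete QEMU invocation:

    qemu-system-x86_64 ... \
        -fw_cfg name=opt/com.coreos/config,file=config.ign

The SHA512 value logged above is the digest Ignition computed for the config it fetched this way.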
Dec 13 01:56:30.993289 ignition[752]: Ignition 2.14.0 Dec 13 01:56:30.993298 ignition[752]: Stage: disks Dec 13 01:56:30.993376 ignition[752]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:30.993384 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:30.994526 ignition[752]: disks: disks passed Dec 13 01:56:30.994562 ignition[752]: Ignition finished successfully Dec 13 01:56:30.998464 systemd[1]: Finished ignition-disks.service. Dec 13 01:56:30.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:30.999429 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:56:31.000924 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:56:31.001754 systemd[1]: Reached target local-fs.target. Dec 13 01:56:31.002529 systemd[1]: Reached target sysinit.target. Dec 13 01:56:31.004153 systemd[1]: Reached target basic.target. Dec 13 01:56:31.005644 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:56:31.017418 systemd-fsck[760]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 01:56:31.022722 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:56:31.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:31.025418 systemd[1]: Mounting sysroot.mount... Dec 13 01:56:31.031472 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:56:31.031924 systemd[1]: Mounted sysroot.mount. Dec 13 01:56:31.033333 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:56:31.035702 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:56:31.037360 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 01:56:31.037394 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:56:31.037413 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:56:31.042535 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:56:31.044528 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:56:31.048276 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:56:31.051745 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:56:31.055362 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:56:31.058900 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:56:31.085078 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:56:31.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:31.087622 systemd[1]: Starting ignition-mount.service... Dec 13 01:56:31.088336 systemd[1]: Starting sysroot-boot.service... Dec 13 01:56:31.095884 bash[812]: umount: /sysroot/usr/share/oem: not mounted. 
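Before sysroot.mount, systemd-fsck-root reported the ROOT ext4 filesystem clean, and verity-setup had earlier activated the dm-verity mapping behind dev-mapper-usr.device. Equivalent manual checks, as a sketch (device names are taken from the messages above; run the fsck only against an unmounted device):

    veritysetup status usr                # dm-verity mapping for /dev/mapper/usr
    e2fsck -n /dev/disk/by-label/ROOT     # read-only ext4 check of the ROOT filesystem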
Dec 13 01:56:31.103400 ignition[813]: INFO : Ignition 2.14.0 Dec 13 01:56:31.104366 ignition[813]: INFO : Stage: mount Dec 13 01:56:31.104366 ignition[813]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:31.104366 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:31.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:31.105374 systemd[1]: Finished sysroot-boot.service. Dec 13 01:56:31.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:31.110139 ignition[813]: INFO : mount: mount passed Dec 13 01:56:31.110139 ignition[813]: INFO : Ignition finished successfully Dec 13 01:56:31.107499 systemd[1]: Finished ignition-mount.service. Dec 13 01:56:31.689411 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:56:31.694463 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (822) Dec 13 01:56:31.696664 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:56:31.696685 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:56:31.696708 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:56:31.700347 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:56:31.702083 systemd[1]: Starting ignition-files.service... Dec 13 01:56:31.716578 ignition[842]: INFO : Ignition 2.14.0 Dec 13 01:56:31.716578 ignition[842]: INFO : Stage: files Dec 13 01:56:31.718178 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:31.718178 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:31.721134 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:56:31.722452 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:56:31.722452 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:56:31.725527 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:56:31.726910 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:56:31.728656 unknown[842]: wrote ssh authorized keys file for user: core Dec 13 01:56:31.729671 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:56:31.731367 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:56:31.733236 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:56:31.820910 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:56:31.904275 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:56:31.906286 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:56:31.908010 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:56:32.275865 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:56:32.374341 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:56:32.376287 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:56:32.491629 systemd-networkd[718]: eth0: Gained IPv6LL Dec 13 01:56:32.803532 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:56:33.124523 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:56:33.124523 ignition[842]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 
01:56:33.128564 ignition[842]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:56:33.128564 ignition[842]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:56:33.158487 ignition[842]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:56:33.160096 ignition[842]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:56:33.160096 ignition[842]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:56:33.160096 ignition[842]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:56:33.160096 ignition[842]: INFO : files: files passed Dec 13 01:56:33.160096 ignition[842]: INFO : Ignition finished successfully Dec 13 01:56:33.167192 systemd[1]: Finished ignition-files.service. Dec 13 01:56:33.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.169013 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 01:56:33.170007 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 01:56:33.170708 systemd[1]: Starting ignition-quench.service... Dec 13 01:56:33.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.173005 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:56:33.173092 systemd[1]: Finished ignition-quench.service. Dec 13 01:56:33.179114 initrd-setup-root-after-ignition[867]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 01:56:33.182014 initrd-setup-root-after-ignition[869]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:56:33.182610 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
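Everything in the files stage above (the "core" user and its SSH keys, the fetched archives, the written units, the preset flips) is driven by the user config fetched from QEMU earlier. A heavily trimmed sketch of a config that would produce operations of this shape, assuming a v3 Ignition spec and with placeholder key and unit contents:

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
        ]
      }
    }

An enabled/disabled flag on a unit is what surfaces in the log as "setting preset to enabled" for prepare-helm.service and as the enablement-symlink removal for coreos-metadata.service.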
Dec 13 01:56:33.190353 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 01:56:33.190377 kernel: audit: type=1130 audit(1734054993.184:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.184683 systemd[1]: Reached target ignition-complete.target. Dec 13 01:56:33.190988 systemd[1]: Starting initrd-parse-etc.service... Dec 13 01:56:33.203435 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:56:33.203581 systemd[1]: Finished initrd-parse-etc.service. Dec 13 01:56:33.212804 kernel: audit: type=1130 audit(1734054993.204:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.212824 kernel: audit: type=1131 audit(1734054993.205:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.205592 systemd[1]: Reached target initrd-fs.target. Dec 13 01:56:33.212841 systemd[1]: Reached target initrd.target. Dec 13 01:56:33.213708 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 01:56:33.214464 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 01:56:33.223338 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 01:56:33.228524 kernel: audit: type=1130 audit(1734054993.222:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.224032 systemd[1]: Starting initrd-cleanup.service... Dec 13 01:56:33.233427 systemd[1]: Stopped target nss-lookup.target. Dec 13 01:56:33.235160 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 01:56:33.236218 systemd[1]: Stopped target timers.target. Dec 13 01:56:33.237799 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:56:33.244087 kernel: audit: type=1131 audit(1734054993.239:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.237928 systemd[1]: Stopped dracut-pre-pivot.service. 
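From here on, unit starts and stops are mirrored as kernel audit records (the type=1130/1131 lines), and kauditd notes 27 suppressed callbacks. The same records can be read back in interpreted form later, as a sketch:

    ausearch -i -m SERVICE_START,SERVICE_STOP   # interpreted unit start/stop events (needs auditd collecting)
    journalctl _TRANSPORT=audit                 # the same records straight from the journal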
Dec 13 01:56:33.239615 systemd[1]: Stopped target initrd.target. Dec 13 01:56:33.244215 systemd[1]: Stopped target basic.target. Dec 13 01:56:33.245776 systemd[1]: Stopped target ignition-complete.target. Dec 13 01:56:33.247357 systemd[1]: Stopped target ignition-diskful.target. Dec 13 01:56:33.248942 systemd[1]: Stopped target initrd-root-device.target. Dec 13 01:56:33.250687 systemd[1]: Stopped target remote-fs.target. Dec 13 01:56:33.252308 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 01:56:33.254004 systemd[1]: Stopped target sysinit.target. Dec 13 01:56:33.255537 systemd[1]: Stopped target local-fs.target. Dec 13 01:56:33.257120 systemd[1]: Stopped target local-fs-pre.target. Dec 13 01:56:33.258675 systemd[1]: Stopped target swap.target. Dec 13 01:56:33.265994 kernel: audit: type=1131 audit(1734054993.261:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.260117 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:56:33.260267 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 01:56:33.272201 kernel: audit: type=1131 audit(1734054993.267:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.261856 systemd[1]: Stopped target cryptsetup.target. Dec 13 01:56:33.276598 kernel: audit: type=1131 audit(1734054993.271:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.266073 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:56:33.266200 systemd[1]: Stopped dracut-initqueue.service. Dec 13 01:56:33.267980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:56:33.268109 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 01:56:33.272396 systemd[1]: Stopped target paths.target. Dec 13 01:56:33.276725 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:56:33.278501 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 01:56:33.280674 systemd[1]: Stopped target slices.target. Dec 13 01:56:33.291835 kernel: audit: type=1131 audit(1734054993.286:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:33.282310 systemd[1]: Stopped target sockets.target. Dec 13 01:56:33.296047 kernel: audit: type=1131 audit(1734054993.291:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.283813 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:56:33.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.299586 ignition[882]: INFO : Ignition 2.14.0 Dec 13 01:56:33.299586 ignition[882]: INFO : Stage: umount Dec 13 01:56:33.299586 ignition[882]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:33.299586 ignition[882]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:33.299586 ignition[882]: INFO : umount: umount passed Dec 13 01:56:33.299586 ignition[882]: INFO : Ignition finished successfully Dec 13 01:56:33.283886 systemd[1]: Closed iscsid.socket. Dec 13 01:56:33.285599 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:56:33.285700 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 01:56:33.287439 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:56:33.287533 systemd[1]: Stopped ignition-files.service. Dec 13 01:56:33.292593 systemd[1]: Stopping ignition-mount.service... Dec 13 01:56:33.296258 systemd[1]: Stopping iscsiuio.service... Dec 13 01:56:33.297593 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:56:33.297742 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 01:56:33.300481 systemd[1]: Stopping sysroot-boot.service... Dec 13 01:56:33.302755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:56:33.306248 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 01:56:33.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.315977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:56:33.317013 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 01:56:33.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.320982 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:56:33.322547 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 01:56:33.323478 systemd[1]: Stopped iscsiuio.service. Dec 13 01:56:33.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.325409 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:56:33.325499 systemd[1]: Stopped ignition-mount.service. 
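The teardown here walks the initrd's dependency graph in reverse: targets first, then the services and sockets ordered beneath them. One way to view that ordering on a live system, as a sketch:

    systemctl list-dependencies --before initrd-switch-root.target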
Dec 13 01:56:33.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.328212 systemd[1]: Stopped target network.target. Dec 13 01:56:33.329779 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:56:33.329806 systemd[1]: Closed iscsiuio.socket. Dec 13 01:56:33.331933 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:56:33.331971 systemd[1]: Stopped ignition-disks.service. Dec 13 01:56:33.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.334291 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:56:33.334322 systemd[1]: Stopped ignition-kargs.service. Dec 13 01:56:33.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.336754 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:56:33.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.336788 systemd[1]: Stopped ignition-setup.service. Dec 13 01:56:33.339335 systemd[1]: Stopping systemd-networkd.service... Dec 13 01:56:33.341063 systemd[1]: Stopping systemd-resolved.service... Dec 13 01:56:33.342980 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:56:33.343950 systemd[1]: Finished initrd-cleanup.service. Dec 13 01:56:33.345479 systemd-networkd[718]: eth0: DHCPv6 lease lost Dec 13 01:56:33.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.346493 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:56:33.346577 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:56:33.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.349631 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:56:33.350621 systemd[1]: Stopped systemd-resolved.service. Dec 13 01:56:33.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.353000 audit: BPF prog-id=9 op=UNLOAD Dec 13 01:56:33.353506 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:56:33.353531 systemd[1]: Closed systemd-networkd.socket. Dec 13 01:56:33.355000 audit: BPF prog-id=6 op=UNLOAD Dec 13 01:56:33.356507 systemd[1]: Stopping network-cleanup.service... Dec 13 01:56:33.358091 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:56:33.358131 systemd[1]: Stopped parse-ip-for-networkd.service. 
Dec 13 01:56:33.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.360947 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:56:33.360981 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:56:33.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.363504 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:56:33.363541 systemd[1]: Stopped systemd-modules-load.service. Dec 13 01:56:33.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.366236 systemd[1]: Stopping systemd-udevd.service... Dec 13 01:56:33.368428 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 01:56:33.370625 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:56:33.371602 systemd[1]: Stopped network-cleanup.service. Dec 13 01:56:33.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.374906 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:56:33.375940 systemd[1]: Stopped systemd-udevd.service. Dec 13 01:56:33.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.377818 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:56:33.377851 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 01:56:33.380355 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:56:33.380384 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 01:56:33.382894 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:56:33.382929 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 01:56:33.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.385337 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:56:33.385365 systemd[1]: Stopped dracut-cmdline.service. Dec 13 01:56:33.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.387784 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:56:33.387816 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 01:56:33.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.390838 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 01:56:33.391765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
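dracut-cmdline and dracut-cmdline-ask, stopped above, are the units that parsed the kernel command line at the start of the initrd; the data they consumed stays readable for the whole boot:

    cat /proc/cmdline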
Dec 13 01:56:33.392629 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 01:56:33.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.415895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:56:33.416973 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 01:56:33.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.519365 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:56:33.520370 systemd[1]: Stopped sysroot-boot.service. Dec 13 01:56:33.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.522001 systemd[1]: Reached target initrd-switch-root.target. Dec 13 01:56:33.523817 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:56:33.523853 systemd[1]: Stopped initrd-setup-root.service. Dec 13 01:56:33.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.527344 systemd[1]: Starting initrd-switch-root.service... Dec 13 01:56:33.543407 systemd[1]: Switching root. Dec 13 01:56:33.561172 iscsid[723]: iscsid shutting down. Dec 13 01:56:33.561871 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 01:56:33.561906 systemd-journald[198]: Journal stopped Dec 13 01:56:35.999277 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 01:56:35.999325 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 01:56:35.999337 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 01:56:35.999346 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:56:35.999356 kernel: SELinux: policy capability open_perms=1 Dec 13 01:56:35.999365 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:56:35.999374 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:56:35.999383 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:56:35.999392 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:56:35.999401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:56:35.999411 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:56:35.999426 systemd[1]: Successfully loaded SELinux policy in 38.083ms. Dec 13 01:56:35.999441 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.411ms. 
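This is the hand-off to the real root: journald receives SIGTERM, PID 1 loads the SELinux policy in 38.083ms (with a few classes, e.g. mctp_socket, unknown to the policy and therefore allowed), and the API filesystems are relabelled. Checking the resulting state from a shell, as a sketch:

    getenforce    # overall mode, e.g. Permissive or Enforcing
    sestatus      # loaded policy, mode, and policy capabilities

Note that the AVC records further down carry permissive=1, i.e. the denials are logged but not enforced.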
Dec 13 01:56:35.999472 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:56:35.999483 systemd[1]: Detected virtualization kvm. Dec 13 01:56:35.999493 systemd[1]: Detected architecture x86-64. Dec 13 01:56:35.999502 systemd[1]: Detected first boot. Dec 13 01:56:35.999512 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:56:35.999524 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 01:56:35.999535 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:56:35.999545 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:56:35.999557 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:56:35.999568 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:56:35.999591 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 01:56:35.999603 systemd[1]: Stopped iscsid.service. Dec 13 01:56:35.999614 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:56:35.999624 systemd[1]: Stopped initrd-switch-root.service. Dec 13 01:56:35.999634 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:56:35.999644 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 01:56:35.999654 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 01:56:35.999664 systemd[1]: Created slice system-getty.slice. Dec 13 01:56:35.999675 systemd[1]: Created slice system-modprobe.slice. Dec 13 01:56:35.999685 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 01:56:35.999695 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 01:56:35.999706 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 01:56:35.999716 systemd[1]: Created slice user.slice. Dec 13 01:56:35.999726 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:56:35.999737 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 01:56:35.999746 systemd[1]: Set up automount boot.automount. Dec 13 01:56:35.999757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 01:56:35.999768 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 01:56:35.999778 systemd[1]: Stopped target initrd-fs.target. Dec 13 01:56:35.999788 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 01:56:35.999798 systemd[1]: Reached target integritysetup.target. Dec 13 01:56:35.999808 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:56:35.999819 systemd[1]: Reached target remote-fs.target. Dec 13 01:56:35.999828 systemd[1]: Reached target slices.target. Dec 13 01:56:35.999838 systemd[1]: Reached target swap.target. Dec 13 01:56:35.999848 systemd[1]: Reached target torcx.target. Dec 13 01:56:35.999859 systemd[1]: Reached target veritysetup.target. Dec 13 01:56:35.999869 systemd[1]: Listening on systemd-coredump.socket. 
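The locksmithd.service warnings name their own fix: CPUShares= and MemoryLimit= are legacy cgroup-v1 spellings, superseded by CPUWeight= and MemoryMax=. A drop-in sketch (hypothetical path, illustrative values; CPUWeight=100 is the default-equivalent of CPUShares=1024):

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    CPUWeight=100
    MemoryMax=512M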
Dec 13 01:56:35.999878 systemd[1]: Listening on systemd-initctl.socket. Dec 13 01:56:35.999888 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:56:35.999898 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:56:35.999908 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:56:35.999918 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 01:56:35.999928 systemd[1]: Mounting dev-hugepages.mount... Dec 13 01:56:35.999940 systemd[1]: Mounting dev-mqueue.mount... Dec 13 01:56:35.999951 systemd[1]: Mounting media.mount... Dec 13 01:56:35.999961 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:35.999971 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 01:56:35.999981 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 01:56:35.999991 systemd[1]: Mounting tmp.mount... Dec 13 01:56:36.000000 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 01:56:36.000011 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:56:36.000020 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:56:36.000030 systemd[1]: Starting modprobe@configfs.service... Dec 13 01:56:36.000041 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:56:36.000050 systemd[1]: Starting modprobe@drm.service... Dec 13 01:56:36.000061 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:56:36.000070 systemd[1]: Starting modprobe@fuse.service... Dec 13 01:56:36.000080 systemd[1]: Starting modprobe@loop.service... Dec 13 01:56:36.000094 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:56:36.000105 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:56:36.000115 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 01:56:36.000125 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:56:36.000136 kernel: loop: module loaded Dec 13 01:56:36.000148 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:56:36.000159 systemd[1]: Stopped systemd-journald.service. Dec 13 01:56:36.000170 kernel: fuse: init (API version 7.34) Dec 13 01:56:36.000180 systemd[1]: Starting systemd-journald.service... Dec 13 01:56:36.000190 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:56:36.000199 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:56:36.000209 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:56:36.000219 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:56:36.000230 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:56:36.000240 systemd[1]: Stopped verity-setup.service. Dec 13 01:56:36.000250 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:36.000262 systemd-journald[997]: Journal started Dec 13 01:56:36.000298 systemd-journald[997]: Runtime Journal (/run/log/journal/18816da96b844a3aa09b5e2a89abbd54) is 6.0M, max 48.4M, 42.4M free. 
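systemd-journald came up with a 6.0M runtime journal under /run/log/journal, keyed by the machine ID visible in the path. Two standard ways to watch it, as a sketch:

    journalctl --disk-usage               # total journal size on disk
    journalctl -b -u systemd-journald     # this boot's messages from journald itself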
Dec 13 01:56:33.617000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:56:33.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:56:33.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:56:33.762000 audit: BPF prog-id=10 op=LOAD Dec 13 01:56:33.762000 audit: BPF prog-id=10 op=UNLOAD Dec 13 01:56:33.762000 audit: BPF prog-id=11 op=LOAD Dec 13 01:56:33.762000 audit: BPF prog-id=11 op=UNLOAD Dec 13 01:56:33.793000 audit[916]: AVC avc: denied { associate } for pid=916 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 01:56:33.793000 audit[916]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=899 pid=916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:56:33.793000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:56:33.795000 audit[916]: AVC avc: denied { associate } for pid=916 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 01:56:33.795000 audit[916]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079b9 a2=1ed a3=0 items=2 ppid=899 pid=916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:56:33.795000 audit: CWD cwd="/" Dec 13 01:56:33.795000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:33.795000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:33.795000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:56:35.864000 audit: BPF prog-id=12 op=LOAD Dec 13 01:56:35.864000 audit: BPF prog-id=3 op=UNLOAD Dec 13 01:56:35.864000 audit: BPF prog-id=13 op=LOAD Dec 13 01:56:35.864000 audit: BPF prog-id=14 op=LOAD Dec 13 01:56:35.864000 audit: BPF prog-id=4 op=UNLOAD Dec 13 01:56:35.864000 audit: BPF prog-id=5 op=UNLOAD Dec 13 01:56:35.865000 audit: BPF prog-id=15 op=LOAD Dec 13 01:56:35.865000 audit: BPF prog-id=12 op=UNLOAD Dec 13 
01:56:35.865000 audit: BPF prog-id=16 op=LOAD Dec 13 01:56:35.865000 audit: BPF prog-id=17 op=LOAD Dec 13 01:56:35.865000 audit: BPF prog-id=13 op=UNLOAD Dec 13 01:56:35.865000 audit: BPF prog-id=14 op=UNLOAD Dec 13 01:56:35.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.886000 audit: BPF prog-id=15 op=UNLOAD Dec 13 01:56:35.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:35.976000 audit: BPF prog-id=18 op=LOAD Dec 13 01:56:35.976000 audit: BPF prog-id=19 op=LOAD Dec 13 01:56:35.976000 audit: BPF prog-id=20 op=LOAD Dec 13 01:56:35.976000 audit: BPF prog-id=16 op=UNLOAD Dec 13 01:56:35.976000 audit: BPF prog-id=17 op=UNLOAD Dec 13 01:56:35.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:35.997000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:56:35.997000 audit[997]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe63d08120 a2=4000 a3=7ffe63d081bc items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:56:35.997000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:56:33.792276 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:56:35.863125 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:56:33.792513 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:56:35.863137 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 01:56:33.792537 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:56:35.866674 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:56:33.792572 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 01:56:33.792584 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 01:56:33.792620 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 01:56:33.792646 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 01:56:33.792893 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 01:56:33.792936 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:56:33.792952 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:56:33.793650 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 01:56:33.793695 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 
01:56:33.793719 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 01:56:36.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:33.793740 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 01:56:36.002464 systemd[1]: Started systemd-journald.service. Dec 13 01:56:33.793763 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 01:56:33.793784 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 01:56:35.585089 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:56:35.585323 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:56:36.002712 systemd[1]: Mounted dev-hugepages.mount. Dec 13 01:56:35.585407 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:56:35.585556 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:56:35.585608 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 01:56:35.585659 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2024-12-13T01:56:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 01:56:36.003640 systemd[1]: Mounted dev-mqueue.mount. Dec 13 01:56:36.004458 systemd[1]: Mounted media.mount. Dec 13 01:56:36.005214 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:56:36.006059 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:56:36.006931 systemd[1]: Mounted tmp.mount. 
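The PROCTITLE fields in the audit records above are the audited process's command line, hex-encoded with NUL separators; the long 2F7573722F... strings decode to the torcx-generator invocation. A decode sketch on a short prefix:

    echo 2F7573722F6C6962 | xxd -r -p     # prints /usr/lib, the start of the proctitle above
    # for full records, append | tr '\0' ' ' to turn the NUL separators into spaces

ausearch -i performs the same decoding when replaying audit logs.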
Dec 13 01:56:36.007824 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 01:56:36.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.008958 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:56:36.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.009976 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:56:36.010129 systemd[1]: Finished modprobe@configfs.service. Dec 13 01:56:36.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.011156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:56:36.011318 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:56:36.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.012598 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:56:36.012833 systemd[1]: Finished modprobe@drm.service. Dec 13 01:56:36.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.013903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:56:36.014097 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:56:36.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.015240 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:56:36.015415 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:56:36.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:36.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.016494 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:56:36.016700 systemd[1]: Finished modprobe@loop.service. Dec 13 01:56:36.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.017875 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:56:36.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.019100 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:56:36.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.020350 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:56:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.021733 systemd[1]: Reached target network-pre.target. Dec 13 01:56:36.024016 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 01:56:36.025919 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:56:36.026703 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:56:36.028999 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:56:36.031195 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:56:36.032114 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:56:36.042618 systemd-journald[997]: Time spent on flushing to /var/log/journal/18816da96b844a3aa09b5e2a89abbd54 is 13.466ms for 1164 entries. Dec 13 01:56:36.042618 systemd-journald[997]: System Journal (/var/log/journal/18816da96b844a3aa09b5e2a89abbd54) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:56:36.328147 systemd-journald[997]: Received client request to flush runtime journal. Dec 13 01:56:36.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:36.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.035568 systemd[1]: Starting systemd-random-seed.service... Dec 13 01:56:36.036457 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:56:36.037249 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:56:36.038717 systemd[1]: Starting systemd-sysusers.service... Dec 13 01:56:36.042232 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:56:36.329781 udevadm[1019]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:56:36.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.044603 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:56:36.045587 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:56:36.047563 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:56:36.093988 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:56:36.095236 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:56:36.261311 systemd[1]: Finished systemd-random-seed.service. Dec 13 01:56:36.262344 systemd[1]: Reached target first-boot-complete.target. Dec 13 01:56:36.329018 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:56:36.563257 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 01:56:36.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.564000 audit: BPF prog-id=21 op=LOAD Dec 13 01:56:36.564000 audit: BPF prog-id=22 op=LOAD Dec 13 01:56:36.564000 audit: BPF prog-id=7 op=UNLOAD Dec 13 01:56:36.564000 audit: BPF prog-id=8 op=UNLOAD Dec 13 01:56:36.565312 systemd[1]: Starting systemd-udevd.service... Dec 13 01:56:36.579407 systemd-udevd[1022]: Using default interface naming scheme 'v252'. Dec 13 01:56:36.590960 systemd[1]: Started systemd-udevd.service. Dec 13 01:56:36.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.592000 audit: BPF prog-id=23 op=LOAD Dec 13 01:56:36.594570 systemd[1]: Starting systemd-networkd.service... Dec 13 01:56:36.597000 audit: BPF prog-id=24 op=LOAD Dec 13 01:56:36.597000 audit: BPF prog-id=25 op=LOAD Dec 13 01:56:36.597000 audit: BPF prog-id=26 op=LOAD Dec 13 01:56:36.598957 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:56:36.607362 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 01:56:36.630899 systemd[1]: Started systemd-userdbd.service. 
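udevadm's deprecation warning above asks that lvm2-activation-early.service and lvm2-activation.service stop pulling in systemd-udev-settle.service. A way to check the dependency and, if needed, override it (real systemctl usage; the drop-in is a sketch under the assumption the pull-in is a Wants=/After= pair):

    systemctl show -p Wants,Requires,After lvm2-activation-early.service
    # /etc/systemd/system/lvm2-activation-early.service.d/no-settle.conf
    # [Unit]
    # After=    <- an empty assignment resets the accumulated list
    # (then re-add everything except systemd-udev-settle.service)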
Dec 13 01:56:36.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.642474 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:56:36.642688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:56:36.650469 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:56:36.661000 audit[1034]: AVC avc: denied { confidentiality } for pid=1034 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:56:36.676603 systemd-networkd[1032]: lo: Link UP Dec 13 01:56:36.676615 systemd-networkd[1032]: lo: Gained carrier Dec 13 01:56:36.676969 systemd-networkd[1032]: Enumeration completed Dec 13 01:56:36.677056 systemd[1]: Started systemd-networkd.service. Dec 13 01:56:36.677059 systemd-networkd[1032]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:56:36.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.678639 systemd-networkd[1032]: eth0: Link UP Dec 13 01:56:36.678645 systemd-networkd[1032]: eth0: Gained carrier Dec 13 01:56:36.661000 audit[1034]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56199ad78140 a1=337fc a2=7fc5d321cbc5 a3=5 items=110 ppid=1022 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:56:36.661000 audit: CWD cwd="/" Dec 13 01:56:36.661000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=1 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=2 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=3 name=(null) inode=13305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=4 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=5 name=(null) inode=13306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=6 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=7 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=8 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=9 name=(null) inode=13308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=10 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=11 name=(null) inode=13309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=12 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=13 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=14 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=15 name=(null) inode=13311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=16 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=17 name=(null) inode=13312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=18 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=19 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=20 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=21 name=(null) inode=15362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=22 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=23 name=(null) inode=15363 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 
audit: PATH item=24 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=25 name=(null) inode=15364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=26 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=27 name=(null) inode=15365 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=28 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=29 name=(null) inode=15366 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=30 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=31 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=32 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=33 name=(null) inode=15368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=34 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=35 name=(null) inode=15369 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=36 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=37 name=(null) inode=15370 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=38 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=39 name=(null) inode=15371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=40 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=41 name=(null) inode=15372 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=42 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=43 name=(null) inode=15373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=44 name=(null) inode=15373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=45 name=(null) inode=15374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=46 name=(null) inode=15373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=47 name=(null) inode=15375 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=48 name=(null) inode=15373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=49 name=(null) inode=15376 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=50 name=(null) inode=15373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=51 name=(null) inode=15377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=52 name=(null) inode=15373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=53 name=(null) inode=15378 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=55 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=56 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=57 name=(null) inode=15380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=58 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=59 name=(null) inode=15381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=60 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=61 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=62 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=63 name=(null) inode=15383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=64 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=65 name=(null) inode=15384 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=66 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=67 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=68 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=69 name=(null) inode=15386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=70 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=71 name=(null) inode=15387 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=72 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: 
PATH item=73 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=74 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=75 name=(null) inode=15389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=76 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=77 name=(null) inode=15390 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=78 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=79 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=80 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=81 name=(null) inode=15392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=82 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=83 name=(null) inode=15393 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=84 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=85 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=86 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=87 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=88 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=89 name=(null) inode=15396 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=90 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=91 name=(null) inode=15397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=92 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=93 name=(null) inode=15398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=94 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=95 name=(null) inode=15399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=96 name=(null) inode=15379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=97 name=(null) inode=15400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=98 name=(null) inode=15400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=99 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=100 name=(null) inode=15400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=101 name=(null) inode=15402 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=102 name=(null) inode=15400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=103 name=(null) inode=15403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=104 name=(null) inode=15400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=105 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=106 name=(null) inode=15400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=107 name=(null) inode=15405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PATH item=109 name=(null) inode=1765 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:56:36.661000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:56:36.690575 systemd-networkd[1032]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:56:36.692476 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:56:36.698086 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:56:36.702667 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:56:36.702775 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:56:36.702882 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:56:36.706463 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:56:36.744077 kernel: kvm: Nested Virtualization enabled Dec 13 01:56:36.744154 kernel: SVM: kvm: Nested Paging enabled Dec 13 01:56:36.744168 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 01:56:36.744180 kernel: SVM: Virtual GIF supported Dec 13 01:56:36.758460 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:56:36.784796 systemd[1]: Finished systemd-udev-settle.service. Dec 13 01:56:36.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.786781 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:56:36.793942 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:56:36.818343 systemd[1]: Finished lvm2-activation-early.service. Dec 13 01:56:36.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.819395 systemd[1]: Reached target cryptsetup.target. Dec 13 01:56:36.821174 systemd[1]: Starting lvm2-activation.service... Dec 13 01:56:36.824470 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:56:36.851714 systemd[1]: Finished lvm2-activation.service. Dec 13 01:56:36.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.852667 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:56:36.853520 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
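The long audit trail above (PATH items 0-109) is a single SELinux event: udev-worker PID 1034 touched tracefs/debugfs nodes, and the lockdown denial was recorded with permissive=1, i.e. logged but not enforced. The same event can be pulled back out with the standard audit userspace tools (PID and comm taken from the records above):

    ausearch -m AVC -c '(udev-worker)' -i   # the confidentiality denial, interpreted
    ausearch -p 1034 -i                     # the full SYSCALL/PATH trail for that worker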
Dec 13 01:56:36.853542 systemd[1]: Reached target local-fs.target. Dec 13 01:56:36.854352 systemd[1]: Reached target machines.target. Dec 13 01:56:36.856101 systemd[1]: Starting ldconfig.service... Dec 13 01:56:36.857048 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:56:36.857085 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:56:36.857888 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:56:36.859340 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:56:36.861148 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:56:36.863341 systemd[1]: Starting systemd-sysext.service... Dec 13 01:56:36.864610 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1061 (bootctl) Dec 13 01:56:36.865501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:56:36.872988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:56:36.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.876773 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:56:36.882299 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:56:36.882455 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:56:36.891467 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 01:56:36.899024 systemd-fsck[1068]: fsck.fat 4.2 (2021-01-31) Dec 13 01:56:36.899024 systemd-fsck[1068]: /dev/vda1: 790 files, 119311/258078 clusters Dec 13 01:56:36.900159 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:56:36.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:36.903036 systemd[1]: Mounting boot.mount... Dec 13 01:56:36.919924 systemd[1]: Mounted boot.mount. Dec 13 01:56:36.931462 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:56:36.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.505784 ldconfig[1060]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:56:37.512482 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:56:37.521671 systemd[1]: Finished ldconfig.service. Dec 13 01:56:37.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.529474 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 01:56:37.536955 (sd-sysext)[1074]: Using extensions 'kubernetes'. Dec 13 01:56:37.537222 (sd-sysext)[1074]: Merged extensions into '/usr'. Dec 13 01:56:37.544690 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
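The (sd-sysext) lines that follow show a system extension image named 'kubernetes' being overlaid onto /usr; the loop0/loop1 "detected capacity change" and squashfs messages around them are consistent with that image being attached as a squashfs loop device. Merge state is inspectable with the stock tool (real systemd-sysext verbs):

    systemd-sysext status    # which extensions are merged, and over which hierarchies
    systemd-sysext refresh   # unmerge and re-merge after changing extension images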
Dec 13 01:56:37.545136 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:56:37.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.549040 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.550175 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:56:37.551076 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.552147 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:56:37.553756 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:56:37.555569 systemd[1]: Starting modprobe@loop.service... Dec 13 01:56:37.556355 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.556483 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:56:37.556619 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.559249 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:56:37.560382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:56:37.560495 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:56:37.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.561750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:56:37.561838 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:56:37.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.563086 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:56:37.563169 systemd[1]: Finished modprobe@loop.service. Dec 13 01:56:37.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.564407 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
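Each modprobe@<module>.service above starts, loads its module, and immediately reports "Deactivated successfully" — the expected lifecycle for a oneshot unit without RemainAfterExit=, and the reason every instance produces a paired SERVICE_START/SERVICE_STOP audit record. The template body can be confirmed on the host (real command; the ExecStart shown is the upstream systemd one and may vary slightly by version):

    systemctl cat modprobe@dm_mod.service
    # [Service] Type=oneshot, ExecStart=-/sbin/modprobe -abq %I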
Dec 13 01:56:37.564571 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.565390 systemd[1]: Finished systemd-sysext.service. Dec 13 01:56:37.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.567270 systemd[1]: Starting ensure-sysext.service... Dec 13 01:56:37.568902 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:56:37.572953 systemd[1]: Reloading. Dec 13 01:56:37.579406 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:56:37.581555 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:56:37.584524 systemd-tmpfiles[1082]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:56:37.628591 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T01:56:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:56:37.628617 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-12-13T01:56:37Z" level=info msg="torcx already run" Dec 13 01:56:37.699061 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:56:37.699075 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:56:37.715512 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:56:37.765000 audit: BPF prog-id=27 op=LOAD Dec 13 01:56:37.765000 audit: BPF prog-id=28 op=LOAD Dec 13 01:56:37.765000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:56:37.765000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:56:37.766000 audit: BPF prog-id=29 op=LOAD Dec 13 01:56:37.766000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:56:37.767000 audit: BPF prog-id=30 op=LOAD Dec 13 01:56:37.767000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:56:37.767000 audit: BPF prog-id=31 op=LOAD Dec 13 01:56:37.767000 audit: BPF prog-id=32 op=LOAD Dec 13 01:56:37.767000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:56:37.767000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:56:37.768000 audit: BPF prog-id=33 op=LOAD Dec 13 01:56:37.768000 audit: BPF prog-id=24 op=UNLOAD Dec 13 01:56:37.768000 audit: BPF prog-id=34 op=LOAD Dec 13 01:56:37.768000 audit: BPF prog-id=35 op=LOAD Dec 13 01:56:37.768000 audit: BPF prog-id=25 op=UNLOAD Dec 13 01:56:37.768000 audit: BPF prog-id=26 op=UNLOAD Dec 13 01:56:37.771010 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:56:37.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.775043 systemd[1]: Starting audit-rules.service... 
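The reload warnings above name their own fixes: locksmithd.service lines 8-9 use legacy cgroup directives with direct cgroup-v2 equivalents, and docker.socket still points at the legacy /var/run path. A sketch of the corrected fragments (directive names straight from the log; the elided values stay whatever the units currently set):

    # locksmithd.service: CPUShares=...   -> CPUWeight=...
    #                     MemoryLimit=... -> MemoryMax=...
    # docker.socket:
    # [Socket]
    # ListenStream=/run/docker.sock   (instead of /var/run/docker.sock)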
Dec 13 01:56:37.777132 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:56:37.779199 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:56:37.780000 audit: BPF prog-id=36 op=LOAD Dec 13 01:56:37.782140 systemd[1]: Starting systemd-resolved.service... Dec 13 01:56:37.783000 audit: BPF prog-id=37 op=LOAD Dec 13 01:56:37.784783 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:56:37.786934 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:56:37.788645 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:56:37.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.790000 audit[1155]: SYSTEM_BOOT pid=1155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.794552 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:56:37.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:37.799152 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.799642 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.801094 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:56:37.800000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:56:37.800000 audit[1164]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe31afe30 a2=420 a3=0 items=0 ppid=1144 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:56:37.800000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:56:37.802308 augenrules[1164]: No rules Dec 13 01:56:37.803317 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:56:37.805332 systemd[1]: Starting modprobe@loop.service... Dec 13 01:56:37.806266 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.806426 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:56:37.807551 systemd[1]: Starting systemd-update-done.service... Dec 13 01:56:37.808518 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:56:37.808646 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.809650 systemd[1]: Finished audit-rules.service. Dec 13 01:56:37.810992 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:56:37.812302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:56:37.812452 systemd[1]: Finished modprobe@dm_mod.service. 
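The PROCTITLE record above is hex-encoded with NUL argument separators; decoded, it reads "/sbin/auditctl -R /etc/audit/audit.rules", which matches the augenrules "No rules" line — the compiled rules file evidently loads nothing beyond the single CONFIG_CHANGE entry. Decoding is a one-liner with standard tools:

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '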
Dec 13 01:56:37.814027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:56:37.814140 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:56:37.815753 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:56:37.815874 systemd[1]: Finished modprobe@loop.service. Dec 13 01:56:37.818249 systemd[1]: Finished systemd-update-done.service. Dec 13 01:56:37.820781 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.820996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.822273 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:56:37.824224 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:56:37.826302 systemd[1]: Starting modprobe@loop.service... Dec 13 01:56:37.827294 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.827456 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:56:37.827597 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:56:37.827691 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.828781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:56:37.828927 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:56:37.830349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:56:37.830569 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:56:37.832125 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:56:37.832305 systemd[1]: Finished modprobe@loop.service. Dec 13 01:56:37.833720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:56:37.833896 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.836371 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.836590 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.837621 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:56:37.839298 systemd[1]: Starting modprobe@drm.service... Dec 13 01:56:37.840853 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:56:37.842731 systemd[1]: Starting modprobe@loop.service... Dec 13 01:56:37.843661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:56:37.843767 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:56:37.843798 systemd-resolved[1150]: Positive Trust Anchors: Dec 13 01:56:37.843805 systemd-resolved[1150]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:56:37.843830 systemd-resolved[1150]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:56:37.844700 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:56:37.845895 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:56:37.845986 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:56:37.846907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:56:37.847014 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:56:37.848346 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:56:38.909757 systemd-timesyncd[1154]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:56:38.909972 systemd-timesyncd[1154]: Initial clock synchronization to Fri 2024-12-13 01:56:38.909693 UTC. Dec 13 01:56:38.910054 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:56:38.910149 systemd[1]: Finished modprobe@drm.service. Dec 13 01:56:38.911186 systemd-resolved[1150]: Defaulting to hostname 'linux'. Dec 13 01:56:38.911411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:56:38.911508 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:56:38.912745 systemd[1]: Started systemd-resolved.service. Dec 13 01:56:38.913863 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:56:38.913958 systemd[1]: Finished modprobe@loop.service. Dec 13 01:56:38.915353 systemd[1]: Reached target network.target. Dec 13 01:56:38.916231 systemd[1]: Reached target nss-lookup.target. Dec 13 01:56:38.917124 systemd[1]: Reached target time-set.target. Dec 13 01:56:38.917969 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:56:38.917992 systemd[1]: Reached target sysinit.target. Dec 13 01:56:38.918924 systemd[1]: Started motdgen.path. Dec 13 01:56:38.919682 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:56:38.920926 systemd[1]: Started logrotate.timer. Dec 13 01:56:38.921768 systemd[1]: Started mdadm.timer. Dec 13 01:56:38.922488 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:56:38.923434 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:56:38.923456 systemd[1]: Reached target paths.target. Dec 13 01:56:38.924253 systemd[1]: Reached target timers.target. Dec 13 01:56:38.925310 systemd[1]: Listening on dbus.socket. Dec 13 01:56:38.926890 systemd[1]: Starting docker.socket... Dec 13 01:56:38.929139 systemd[1]: Listening on sshd.socket. Dec 13 01:56:38.930037 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
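Two details in this stretch are easy to misread. First, the jump from 01:56:37.848 to 01:56:38.909 is not missing output: systemd-timesyncd stepped the realtime clock on its initial synchronization against 10.0.0.1:123, so later records simply carry the corrected time. Second, the "Positive Trust Anchors" dump is resolved's built-in root-zone DS record plus the standard negative anchors for private and reverse zones. Both are queryable live (real verbs):

    timedatectl timesync-status   # server, poll interval, last sync
    resolvectl status             # resolved state (the log shows it defaulting to hostname 'linux')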
Dec 13 01:56:38.930072 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:56:38.930516 systemd[1]: Finished ensure-sysext.service. Dec 13 01:56:38.931475 systemd[1]: Listening on docker.socket. Dec 13 01:56:38.932980 systemd[1]: Reached target sockets.target. Dec 13 01:56:38.933806 systemd[1]: Reached target basic.target. Dec 13 01:56:38.934613 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:56:38.934641 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:56:38.935300 systemd[1]: Starting containerd.service... Dec 13 01:56:38.936778 systemd[1]: Starting dbus.service... Dec 13 01:56:38.938285 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:56:38.940169 systemd[1]: Starting extend-filesystems.service... Dec 13 01:56:38.941229 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:56:38.943157 jq[1187]: false Dec 13 01:56:38.941965 systemd[1]: Starting motdgen.service... Dec 13 01:56:38.943511 systemd[1]: Starting prepare-helm.service... Dec 13 01:56:38.944967 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:56:38.946593 systemd[1]: Starting sshd-keygen.service... Dec 13 01:56:38.949113 systemd[1]: Starting systemd-logind.service... Dec 13 01:56:38.949979 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:56:38.950028 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:56:38.950315 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:56:38.951350 systemd[1]: Starting update-engine.service... Dec 13 01:56:38.953281 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:56:38.956958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:56:38.957095 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:56:38.957882 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:56:38.958013 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:56:38.966762 jq[1203]: true Dec 13 01:56:38.967033 extend-filesystems[1188]: Found loop1 Dec 13 01:56:38.967033 extend-filesystems[1188]: Found sr0 Dec 13 01:56:38.967033 extend-filesystems[1188]: Found vda Dec 13 01:56:38.967033 extend-filesystems[1188]: Found vda1 Dec 13 01:56:38.967033 extend-filesystems[1188]: Found vda2 Dec 13 01:56:38.967033 extend-filesystems[1188]: Found vda3 Dec 13 01:56:38.967033 extend-filesystems[1188]: Found usr Dec 13 01:56:38.967033 extend-filesystems[1188]: Found vda4 Dec 13 01:56:38.967033 extend-filesystems[1188]: Found vda6 Dec 13 01:56:38.981380 jq[1211]: true Dec 13 01:56:38.971937 systemd[1]: Started dbus.service. 
Dec 13 01:56:38.981694 tar[1207]: linux-amd64/helm Dec 13 01:56:38.981874 extend-filesystems[1188]: Found vda7 Dec 13 01:56:38.981874 extend-filesystems[1188]: Found vda9 Dec 13 01:56:38.981874 extend-filesystems[1188]: Checking size of /dev/vda9 Dec 13 01:56:38.971798 dbus-daemon[1186]: [system] SELinux support is enabled Dec 13 01:56:38.974479 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:56:38.974613 systemd[1]: Finished motdgen.service. Dec 13 01:56:38.975618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:56:38.975674 systemd[1]: Reached target system-config.target. Dec 13 01:56:38.976625 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:56:38.976653 systemd[1]: Reached target user-config.target. Dec 13 01:56:38.994409 env[1209]: time="2024-12-13T01:56:38.994354594Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:56:38.995135 update_engine[1200]: I1213 01:56:38.994976 1200 main.cc:92] Flatcar Update Engine starting Dec 13 01:56:38.999042 extend-filesystems[1188]: Resized partition /dev/vda9 Dec 13 01:56:39.006029 update_engine[1200]: I1213 01:56:38.999219 1200 update_check_scheduler.cc:74] Next update check in 11m20s Dec 13 01:56:39.000266 systemd[1]: Started update-engine.service. Dec 13 01:56:39.004411 systemd[1]: Started locksmithd.service. Dec 13 01:56:39.013274 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:56:39.030868 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:56:39.030938 bash[1231]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:56:39.021311 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:56:39.027494 systemd-logind[1198]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:56:39.027514 systemd-logind[1198]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:56:39.027854 systemd-logind[1198]: New seat seat0. Dec 13 01:56:39.031776 systemd[1]: Started systemd-logind.service. Dec 13 01:56:39.040561 env[1209]: time="2024-12-13T01:56:39.040515409Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:56:39.041176 env[1209]: time="2024-12-13T01:56:39.040645673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:56:39.041677 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:56:39.042408 env[1209]: time="2024-12-13T01:56:39.042369005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:56:39.042408 env[1209]: time="2024-12-13T01:56:39.042396918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063255 env[1209]: time="2024-12-13T01:56:39.063217832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063255 env[1209]: time="2024-12-13T01:56:39.063256484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063255 env[1209]: time="2024-12-13T01:56:39.063270370Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:56:39.063408 env[1209]: time="2024-12-13T01:56:39.063279387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063408 env[1209]: time="2024-12-13T01:56:39.063350851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063558 env[1209]: time="2024-12-13T01:56:39.063541389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063690 env[1209]: time="2024-12-13T01:56:39.063670321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:56:39.063690 env[1209]: time="2024-12-13T01:56:39.063687232Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:56:39.063792 env[1209]: time="2024-12-13T01:56:39.063750170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:56:39.063792 env[1209]: time="2024-12-13T01:56:39.063760369Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:56:39.064075 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:56:39.064075 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:56:39.064075 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:56:39.071766 extend-filesystems[1188]: Resized filesystem in /dev/vda9 Dec 13 01:56:39.066121 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:56:39.066289 systemd[1]: Finished extend-filesystems.service. Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074183310Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074233434Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074250265Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074297494Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074316971Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074377514Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074396079Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074415596Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074431345Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074446984Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074461381Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074475799Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074576097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:56:39.076023 env[1209]: time="2024-12-13T01:56:39.074670804Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.074947283Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.074972320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.074987438Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075039055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075052981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075066777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075080022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075093467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075117662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075130477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075145885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075161254Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075270740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075286078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076383 env[1209]: time="2024-12-13T01:56:39.075300375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076804 env[1209]: time="2024-12-13T01:56:39.075314562Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:56:39.076804 env[1209]: time="2024-12-13T01:56:39.075330882Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:56:39.076804 env[1209]: time="2024-12-13T01:56:39.075342925Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:56:39.076804 env[1209]: time="2024-12-13T01:56:39.075367431Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:56:39.076804 env[1209]: time="2024-12-13T01:56:39.075406374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:56:39.076980 env[1209]: time="2024-12-13T01:56:39.075613703Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:56:39.076980 env[1209]: time="2024-12-13T01:56:39.075694044Z" level=info msg="Connect containerd service" Dec 13 01:56:39.076980 env[1209]: time="2024-12-13T01:56:39.075737114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:56:39.077802 env[1209]: time="2024-12-13T01:56:39.077778223Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:56:39.078004 env[1209]: time="2024-12-13T01:56:39.077968199Z" level=info msg="Start subscribing containerd event" Dec 13 01:56:39.078113 env[1209]: time="2024-12-13T01:56:39.078092643Z" level=info msg="Start recovering state" Dec 13 01:56:39.078250 env[1209]: time="2024-12-13T01:56:39.078231142Z" level=info msg="Start event monitor" Dec 13 01:56:39.078336 env[1209]: time="2024-12-13T01:56:39.078316823Z" level=info msg="Start snapshots syncer" Dec 13 01:56:39.078432 env[1209]: time="2024-12-13T01:56:39.078412984Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:56:39.078512 env[1209]: time="2024-12-13T01:56:39.078493134Z" level=info msg="Start streaming server" Dec 13 01:56:39.078910 env[1209]: time="2024-12-13T01:56:39.078893004Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:56:39.079059 env[1209]: time="2024-12-13T01:56:39.079042094Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:56:39.079246 systemd[1]: Started containerd.service. Dec 13 01:56:39.080748 env[1209]: time="2024-12-13T01:56:39.080727355Z" level=info msg="containerd successfully booted in 0.086948s" Dec 13 01:56:39.089841 locksmithd[1240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:56:39.360176 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:56:39.360593 tar[1207]: linux-amd64/LICENSE Dec 13 01:56:39.360774 tar[1207]: linux-amd64/README.md Dec 13 01:56:39.365340 systemd[1]: Finished prepare-helm.service. Dec 13 01:56:39.376739 systemd[1]: Finished sshd-keygen.service. Dec 13 01:56:39.378922 systemd[1]: Starting issuegen.service... Dec 13 01:56:39.383004 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:56:39.383152 systemd[1]: Finished issuegen.service. Dec 13 01:56:39.385066 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:56:39.389233 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:56:39.391267 systemd[1]: Started getty@tty1.service. Dec 13 01:56:39.393053 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:56:39.394224 systemd[1]: Reached target getty.target. Dec 13 01:56:39.631852 systemd-networkd[1032]: eth0: Gained IPv6LL Dec 13 01:56:39.633387 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:56:39.634761 systemd[1]: Reached target network-online.target. Dec 13 01:56:39.637065 systemd[1]: Starting kubelet.service... Dec 13 01:56:40.215406 systemd[1]: Started kubelet.service. Dec 13 01:56:40.216561 systemd[1]: Reached target multi-user.target. Dec 13 01:56:40.218439 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:56:40.224687 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:56:40.224837 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Dec 13 01:56:40.226049 systemd[1]: Startup finished in 627ms (kernel) + 4.876s (initrd) + 5.587s (userspace) = 11.091s. Dec 13 01:56:40.652912 kubelet[1267]: E1213 01:56:40.652800 1267 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:56:40.654356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:56:40.654462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:56:47.940167 systemd[1]: Created slice system-sshd.slice. Dec 13 01:56:47.941125 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:53658.service. Dec 13 01:56:47.980373 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:56:47.981563 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:56:47.990454 systemd-logind[1198]: New session 1 of user core. Dec 13 01:56:47.991497 systemd[1]: Created slice user-500.slice. Dec 13 01:56:47.992751 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:56:48.000545 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:56:48.001935 systemd[1]: Starting user@500.service... Dec 13 01:56:48.004194 (systemd)[1280]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:56:48.068256 systemd[1280]: Queued start job for default target default.target. Dec 13 01:56:48.068684 systemd[1280]: Reached target paths.target. Dec 13 01:56:48.068709 systemd[1280]: Reached target sockets.target. Dec 13 01:56:48.068724 systemd[1280]: Reached target timers.target. Dec 13 01:56:48.068737 systemd[1280]: Reached target basic.target. Dec 13 01:56:48.068779 systemd[1280]: Reached target default.target. Dec 13 01:56:48.068810 systemd[1280]: Startup finished in 59ms. Dec 13 01:56:48.068895 systemd[1]: Started user@500.service. Dec 13 01:56:48.070036 systemd[1]: Started session-1.scope. Dec 13 01:56:48.121401 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:53666.service. Dec 13 01:56:48.161991 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:56:48.163262 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:56:48.167572 systemd-logind[1198]: New session 2 of user core. Dec 13 01:56:48.168455 systemd[1]: Started session-2.scope. Dec 13 01:56:48.221183 sshd[1289]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:48.223695 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:53666.service: Deactivated successfully. Dec 13 01:56:48.224301 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:56:48.224820 systemd-logind[1198]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:56:48.225963 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:53676.service. Dec 13 01:56:48.226601 systemd-logind[1198]: Removed session 2. Dec 13 01:56:48.261355 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 53676 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:56:48.262391 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:56:48.265249 systemd-logind[1198]: New session 3 of user core. 
Dec 13 01:56:48.265932 systemd[1]: Started session-3.scope. Dec 13 01:56:48.314416 sshd[1295]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:48.316803 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:53676.service: Deactivated successfully. Dec 13 01:56:48.317252 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:56:48.317684 systemd-logind[1198]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:56:48.318592 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:53684.service. Dec 13 01:56:48.319193 systemd-logind[1198]: Removed session 3. Dec 13 01:56:48.354124 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 53684 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:56:48.355199 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:56:48.358343 systemd-logind[1198]: New session 4 of user core. Dec 13 01:56:48.359017 systemd[1]: Started session-4.scope. Dec 13 01:56:48.410876 sshd[1301]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:48.413346 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:53684.service: Deactivated successfully. Dec 13 01:56:48.413848 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:56:48.414256 systemd-logind[1198]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:56:48.415107 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:53700.service. Dec 13 01:56:48.415724 systemd-logind[1198]: Removed session 4. Dec 13 01:56:48.451652 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 53700 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:56:48.452783 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:56:48.456081 systemd-logind[1198]: New session 5 of user core. Dec 13 01:56:48.456777 systemd[1]: Started session-5.scope. Dec 13 01:56:48.509942 sudo[1310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:56:48.510144 sudo[1310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:56:48.528672 systemd[1]: Starting docker.service... 
Dec 13 01:56:48.561693 env[1322]: time="2024-12-13T01:56:48.561626702Z" level=info msg="Starting up" Dec 13 01:56:48.562786 env[1322]: time="2024-12-13T01:56:48.562744418Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:56:48.562786 env[1322]: time="2024-12-13T01:56:48.562769846Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:56:48.562786 env[1322]: time="2024-12-13T01:56:48.562789683Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:56:48.562786 env[1322]: time="2024-12-13T01:56:48.562798770Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:56:48.564087 env[1322]: time="2024-12-13T01:56:48.564060687Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:56:48.564087 env[1322]: time="2024-12-13T01:56:48.564079052Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:56:48.564149 env[1322]: time="2024-12-13T01:56:48.564090533Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:56:48.564149 env[1322]: time="2024-12-13T01:56:48.564102085Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:56:48.568100 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1121325317-merged.mount: Deactivated successfully. Dec 13 01:56:49.310006 env[1322]: time="2024-12-13T01:56:49.309971974Z" level=info msg="Loading containers: start." Dec 13 01:56:49.502666 kernel: Initializing XFRM netlink socket Dec 13 01:56:49.529479 env[1322]: time="2024-12-13T01:56:49.529441205Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 01:56:49.576486 systemd-networkd[1032]: docker0: Link UP Dec 13 01:56:49.652522 env[1322]: time="2024-12-13T01:56:49.652482045Z" level=info msg="Loading containers: done." Dec 13 01:56:49.661969 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2537483242-merged.mount: Deactivated successfully. Dec 13 01:56:49.666823 env[1322]: time="2024-12-13T01:56:49.666788329Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:56:49.666964 env[1322]: time="2024-12-13T01:56:49.666946285Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 01:56:49.667035 env[1322]: time="2024-12-13T01:56:49.667019563Z" level=info msg="Daemon has completed initialization" Dec 13 01:56:49.688335 systemd[1]: Started docker.service. Dec 13 01:56:49.691846 env[1322]: time="2024-12-13T01:56:49.691813269Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:56:50.358015 env[1209]: time="2024-12-13T01:56:50.357970028Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:56:50.665250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:56:50.665433 systemd[1]: Stopped kubelet.service. Dec 13 01:56:50.666565 systemd[1]: Starting kubelet.service... Dec 13 01:56:50.759146 systemd[1]: Started kubelet.service. 
Dec 13 01:56:51.062507 kubelet[1463]: E1213 01:56:51.062375 1463 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:56:51.065301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:56:51.065432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:56:51.517229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876842594.mount: Deactivated successfully. Dec 13 01:56:53.379578 env[1209]: time="2024-12-13T01:56:53.379509914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:53.381458 env[1209]: time="2024-12-13T01:56:53.381425096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:53.383105 env[1209]: time="2024-12-13T01:56:53.383081543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:53.384699 env[1209]: time="2024-12-13T01:56:53.384669331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:53.385309 env[1209]: time="2024-12-13T01:56:53.385275318Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:56:53.393432 env[1209]: time="2024-12-13T01:56:53.393400108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:56:57.406941 env[1209]: time="2024-12-13T01:56:57.406847899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:57.412878 env[1209]: time="2024-12-13T01:56:57.412819269Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:57.416079 env[1209]: time="2024-12-13T01:56:57.416043427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:57.417884 env[1209]: time="2024-12-13T01:56:57.417823906Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:57.418397 env[1209]: time="2024-12-13T01:56:57.418365572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 
01:56:57.427806 env[1209]: time="2024-12-13T01:56:57.427755195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:56:58.881780 env[1209]: time="2024-12-13T01:56:58.881726579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:58.884379 env[1209]: time="2024-12-13T01:56:58.884340783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:58.886188 env[1209]: time="2024-12-13T01:56:58.886146790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:58.887549 env[1209]: time="2024-12-13T01:56:58.887516389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:56:58.888208 env[1209]: time="2024-12-13T01:56:58.888151130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:56:58.896519 env[1209]: time="2024-12-13T01:56:58.896487266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:57:00.407871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688442830.mount: Deactivated successfully. Dec 13 01:57:01.165280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:57:01.165453 systemd[1]: Stopped kubelet.service. Dec 13 01:57:01.166677 systemd[1]: Starting kubelet.service... Dec 13 01:57:01.246136 systemd[1]: Started kubelet.service. 
Dec 13 01:57:01.481333 env[1209]: time="2024-12-13T01:57:01.481204968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:01.483005 env[1209]: time="2024-12-13T01:57:01.482980448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:01.484438 env[1209]: time="2024-12-13T01:57:01.484377168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:01.485659 env[1209]: time="2024-12-13T01:57:01.485621201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:01.486086 env[1209]: time="2024-12-13T01:57:01.486051679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:57:01.494862 env[1209]: time="2024-12-13T01:57:01.494821729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:57:01.495269 kubelet[1499]: E1213 01:57:01.495239 1499 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:01.497213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:01.497334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:02.083931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354005628.mount: Deactivated successfully. 
Dec 13 01:57:03.986932 env[1209]: time="2024-12-13T01:57:03.986875630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.054825 env[1209]: time="2024-12-13T01:57:04.054773106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.156435 env[1209]: time="2024-12-13T01:57:04.156377777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.185312 env[1209]: time="2024-12-13T01:57:04.185261024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.186142 env[1209]: time="2024-12-13T01:57:04.186106450Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:57:04.194748 env[1209]: time="2024-12-13T01:57:04.194713725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:57:04.976344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251744730.mount: Deactivated successfully. Dec 13 01:57:04.991547 env[1209]: time="2024-12-13T01:57:04.991490240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.993360 env[1209]: time="2024-12-13T01:57:04.993326544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.994688 env[1209]: time="2024-12-13T01:57:04.994660707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.996077 env[1209]: time="2024-12-13T01:57:04.996052126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:04.996445 env[1209]: time="2024-12-13T01:57:04.996419035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:57:05.004232 env[1209]: time="2024-12-13T01:57:05.004195511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:57:05.642071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844105322.mount: Deactivated successfully. 
Dec 13 01:57:08.955055 env[1209]: time="2024-12-13T01:57:08.954984282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:08.957593 env[1209]: time="2024-12-13T01:57:08.957540677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:08.960509 env[1209]: time="2024-12-13T01:57:08.960469410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:08.962539 env[1209]: time="2024-12-13T01:57:08.962499197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:08.963293 env[1209]: time="2024-12-13T01:57:08.963244004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:57:11.209907 systemd[1]: Stopped kubelet.service. Dec 13 01:57:11.211737 systemd[1]: Starting kubelet.service... Dec 13 01:57:11.224687 systemd[1]: Reloading. Dec 13 01:57:11.289681 /usr/lib/systemd/system-generators/torcx-generator[1629]: time="2024-12-13T01:57:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:11.289707 /usr/lib/systemd/system-generators/torcx-generator[1629]: time="2024-12-13T01:57:11Z" level=info msg="torcx already run" Dec 13 01:57:11.617481 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:11.617498 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:57:11.634196 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:57:11.711885 systemd[1]: Started kubelet.service. Dec 13 01:57:11.713373 systemd[1]: Stopping kubelet.service... Dec 13 01:57:11.713781 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:57:11.713968 systemd[1]: Stopped kubelet.service. Dec 13 01:57:11.715416 systemd[1]: Starting kubelet.service... Dec 13 01:57:11.786244 systemd[1]: Started kubelet.service. Dec 13 01:57:11.825781 kubelet[1678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:57:11.825781 kubelet[1678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:57:11.825781 kubelet[1678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:57:11.826134 kubelet[1678]: I1213 01:57:11.825812 1678 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:57:12.285970 kubelet[1678]: I1213 01:57:12.285924 1678 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:57:12.285970 kubelet[1678]: I1213 01:57:12.285954 1678 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:57:12.286210 kubelet[1678]: I1213 01:57:12.286187 1678 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:57:12.302004 kubelet[1678]: E1213 01:57:12.301970 1678 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.117:6443: connect: connection refused Dec 13 01:57:12.303189 kubelet[1678]: I1213 01:57:12.303158 1678 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:57:12.312024 kubelet[1678]: I1213 01:57:12.311995 1678 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:57:12.312178 kubelet[1678]: I1213 01:57:12.312148 1678 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:57:12.312320 kubelet[1678]: I1213 01:57:12.312169 1678 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:57:12.312398 kubelet[1678]: I1213 01:57:12.312322 1678 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 13 01:57:12.312398 kubelet[1678]: I1213 01:57:12.312331 1678 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:57:12.312448 kubelet[1678]: I1213 01:57:12.312422 1678 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:57:12.313024 kubelet[1678]: I1213 01:57:12.313004 1678 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:57:12.313024 kubelet[1678]: I1213 01:57:12.313021 1678 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:57:12.313075 kubelet[1678]: I1213 01:57:12.313038 1678 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:57:12.313075 kubelet[1678]: I1213 01:57:12.313050 1678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:57:12.313598 kubelet[1678]: W1213 01:57:12.313555 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Dec 13 01:57:12.313630 kubelet[1678]: E1213 01:57:12.313601 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Dec 13 01:57:12.320319 kubelet[1678]: I1213 01:57:12.320300 1678 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:57:12.320418 kubelet[1678]: W1213 01:57:12.320350 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Dec 13 01:57:12.320607 kubelet[1678]: E1213 01:57:12.320585 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Dec 13 01:57:12.324394 kubelet[1678]: I1213 01:57:12.324376 1678 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:57:12.324464 kubelet[1678]: W1213 01:57:12.324420 1678 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:57:12.324938 kubelet[1678]: I1213 01:57:12.324919 1678 server.go:1264] "Started kubelet" Dec 13 01:57:12.325214 kubelet[1678]: I1213 01:57:12.325194 1678 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:57:12.326163 kubelet[1678]: I1213 01:57:12.326142 1678 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:57:12.327960 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 01:57:12.328094 kubelet[1678]: I1213 01:57:12.328063 1678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:57:12.328579 kubelet[1678]: I1213 01:57:12.328513 1678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:57:12.328816 kubelet[1678]: I1213 01:57:12.328795 1678 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:57:12.330353 kubelet[1678]: I1213 01:57:12.330326 1678 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:57:12.330647 kubelet[1678]: I1213 01:57:12.330606 1678 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 01:57:12.330706 kubelet[1678]: I1213 01:57:12.330678 1678 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:57:12.331351 kubelet[1678]: W1213 01:57:12.331311 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:12.331351 kubelet[1678]: E1213 01:57:12.331349 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:12.331459 kubelet[1678]: E1213 01:57:12.331386 1678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms"
Dec 13 01:57:12.331620 kubelet[1678]: I1213 01:57:12.331591 1678 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:57:12.331704 kubelet[1678]: I1213 01:57:12.331663 1678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:57:12.332664 kubelet[1678]: I1213 01:57:12.332626 1678 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:57:12.335278 kubelet[1678]: E1213 01:57:12.335183 1678 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099dd4eb4457b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:57:12.324900219 +0000 UTC m=+0.533997579,LastTimestamp:2024-12-13 01:57:12.324900219 +0000 UTC m=+0.533997579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:57:12.335398 kubelet[1678]: E1213 01:57:12.335343 1678 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:57:12.341445 kubelet[1678]: I1213 01:57:12.341393 1678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:57:12.342475 kubelet[1678]: I1213 01:57:12.342453 1678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:57:12.342475 kubelet[1678]: I1213 01:57:12.342475 1678 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:57:12.342562 kubelet[1678]: I1213 01:57:12.342492 1678 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 01:57:12.342562 kubelet[1678]: E1213 01:57:12.342520 1678 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:57:12.345136 kubelet[1678]: W1213 01:57:12.345116 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:12.345191 kubelet[1678]: E1213 01:57:12.345144 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:12.345415 kubelet[1678]: I1213 01:57:12.345403 1678 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:57:12.345498 kubelet[1678]: I1213 01:57:12.345478 1678 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:57:12.345549 kubelet[1678]: I1213 01:57:12.345507 1678 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:12.431906 kubelet[1678]: I1213 01:57:12.431853 1678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:12.432424 kubelet[1678]: E1213 01:57:12.432381 1678 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 13 01:57:12.443436 kubelet[1678]: E1213 01:57:12.443417 1678 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:57:12.532074 kubelet[1678]: E1213 01:57:12.532030 1678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms"
Dec 13 01:57:12.634397 kubelet[1678]: I1213 01:57:12.634305 1678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:12.634673 kubelet[1678]: E1213 01:57:12.634617 1678 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 13 01:57:12.643813 kubelet[1678]: E1213 01:57:12.643781 1678 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:57:12.932950 kubelet[1678]: E1213 01:57:12.932869 1678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms"
Dec 13 01:57:13.036578 kubelet[1678]: I1213 01:57:13.036529 1678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:13.036839 kubelet[1678]: E1213 01:57:13.036810 1678 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 13 01:57:13.043973 kubelet[1678]: E1213 01:57:13.043922 1678 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:57:13.385677 kubelet[1678]: W1213 01:57:13.385519 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.385677 kubelet[1678]: E1213 01:57:13.385562 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.465249 kubelet[1678]: W1213 01:57:13.465147 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.465249 kubelet[1678]: E1213 01:57:13.465233 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.591134 kubelet[1678]: W1213 01:57:13.591060 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.591134 kubelet[1678]: E1213 01:57:13.591112 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.734113 kubelet[1678]: E1213 01:57:13.734051 1678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s"
Dec 13 01:57:13.838867 kubelet[1678]: I1213 01:57:13.838833 1678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:13.839242 kubelet[1678]: E1213 01:57:13.839200 1678 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 13 01:57:13.843284 kubelet[1678]: I1213 01:57:13.843258 1678 policy_none.go:49] "None policy: Start"
Dec 13 01:57:13.843927 kubelet[1678]: I1213 01:57:13.843889 1678 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:57:13.843986 kubelet[1678]: I1213 01:57:13.843946 1678 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:57:13.844089 kubelet[1678]: E1213 01:57:13.844071 1678 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:57:13.870862 kubelet[1678]: W1213 01:57:13.870800 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.870862 kubelet[1678]: E1213 01:57:13.870851 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:13.879004 systemd[1]: Created slice kubepods.slice.
Dec 13 01:57:13.882567 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 01:57:13.884795 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 01:57:13.892272 kubelet[1678]: I1213 01:57:13.892235 1678 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:57:13.892451 kubelet[1678]: I1213 01:57:13.892407 1678 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:57:13.892546 kubelet[1678]: I1213 01:57:13.892521 1678 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:57:13.893525 kubelet[1678]: E1213 01:57:13.893495 1678 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:57:14.492737 kubelet[1678]: E1213 01:57:14.492695 1678 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:15.334388 kubelet[1678]: E1213 01:57:15.334340 1678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="3.2s"
Dec 13 01:57:15.440420 kubelet[1678]: I1213 01:57:15.440398 1678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:15.440629 kubelet[1678]: E1213 01:57:15.440604 1678 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 13 01:57:15.444761 kubelet[1678]: I1213 01:57:15.444736 1678 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:57:15.445318 kubelet[1678]: I1213 01:57:15.445297 1678 topology_manager.go:215] "Topology Admit Handler" podUID="35b4ad3e5a2571cbcaaa32aea5b94b1b" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:57:15.445850 kubelet[1678]: I1213 01:57:15.445835 1678 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:57:15.447798 kubelet[1678]: W1213 01:57:15.447755 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:15.447798 kubelet[1678]: E1213 01:57:15.447789 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:15.450151 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
Dec 13 01:57:15.460350 systemd[1]: Created slice kubepods-burstable-pod35b4ad3e5a2571cbcaaa32aea5b94b1b.slice.
Dec 13 01:57:15.472135 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice.
Dec 13 01:57:15.546707 kubelet[1678]: I1213 01:57:15.546682 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35b4ad3e5a2571cbcaaa32aea5b94b1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"35b4ad3e5a2571cbcaaa32aea5b94b1b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:15.547009 kubelet[1678]: I1213 01:57:15.546720 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:15.547009 kubelet[1678]: I1213 01:57:15.546757 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:15.547009 kubelet[1678]: I1213 01:57:15.546772 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:15.547009 kubelet[1678]: I1213 01:57:15.546786 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:57:15.547009 kubelet[1678]: I1213 01:57:15.546810 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35b4ad3e5a2571cbcaaa32aea5b94b1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"35b4ad3e5a2571cbcaaa32aea5b94b1b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:15.547166 kubelet[1678]: I1213 01:57:15.546829 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35b4ad3e5a2571cbcaaa32aea5b94b1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"35b4ad3e5a2571cbcaaa32aea5b94b1b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:15.547166 kubelet[1678]: I1213 01:57:15.546849 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:15.547166 kubelet[1678]: I1213 01:57:15.546868 1678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:15.695710 kubelet[1678]: E1213 01:57:15.695624 1678 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099dd4eb4457b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:57:12.324900219 +0000 UTC m=+0.533997579,LastTimestamp:2024-12-13 01:57:12.324900219 +0000 UTC m=+0.533997579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:57:15.759133 kubelet[1678]: E1213 01:57:15.759113 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:15.759759 env[1209]: time="2024-12-13T01:57:15.759721018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:15.770803 kubelet[1678]: E1213 01:57:15.770781 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:15.771099 env[1209]: time="2024-12-13T01:57:15.771054618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:35b4ad3e5a2571cbcaaa32aea5b94b1b,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:15.774292 kubelet[1678]: E1213 01:57:15.774272 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:15.774646 env[1209]: time="2024-12-13T01:57:15.774586960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:16.141998 kubelet[1678]: W1213 01:57:16.141899 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:16.141998 kubelet[1678]: E1213 01:57:16.141940 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:16.225471 kubelet[1678]: W1213 01:57:16.225435 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:16.225579 kubelet[1678]: E1213 01:57:16.225475 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:16.293996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740158367.mount: Deactivated successfully.
Dec 13 01:57:16.300818 env[1209]: time="2024-12-13T01:57:16.300775221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.303549 env[1209]: time="2024-12-13T01:57:16.303501477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.305170 env[1209]: time="2024-12-13T01:57:16.305118227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.306112 env[1209]: time="2024-12-13T01:57:16.306082285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.307776 env[1209]: time="2024-12-13T01:57:16.307750544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.309023 env[1209]: time="2024-12-13T01:57:16.309000250Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.310464 env[1209]: time="2024-12-13T01:57:16.310434670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.311743 env[1209]: time="2024-12-13T01:57:16.311712299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.313674 env[1209]: time="2024-12-13T01:57:16.313649824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.316690 env[1209]: time="2024-12-13T01:57:16.316658803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.317357 env[1209]: time="2024-12-13T01:57:16.317329168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.318109 env[1209]: time="2024-12-13T01:57:16.318077853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:16.337903 env[1209]: time="2024-12-13T01:57:16.337837958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:16.337903 env[1209]: time="2024-12-13T01:57:16.337876431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:16.337903 env[1209]: time="2024-12-13T01:57:16.337888595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:16.338130 env[1209]: time="2024-12-13T01:57:16.338064512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0cc6ec911eecda8626c4f91a1f6f53cc9fc2adcbc42a4dc465f4c6761d499e pid=1719 runtime=io.containerd.runc.v2
Dec 13 01:57:16.345792 env[1209]: time="2024-12-13T01:57:16.345625268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:16.345792 env[1209]: time="2024-12-13T01:57:16.345688509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:16.345792 env[1209]: time="2024-12-13T01:57:16.345698539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:16.346039 env[1209]: time="2024-12-13T01:57:16.345923169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa5c186787e2e43c0d3dbf3bc6979d5c97077f807799ba9b1d7e28ca2a82b65d pid=1744 runtime=io.containerd.runc.v2
Dec 13 01:57:16.346292 env[1209]: time="2024-12-13T01:57:16.346243934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:16.346372 env[1209]: time="2024-12-13T01:57:16.346289912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:16.346372 env[1209]: time="2024-12-13T01:57:16.346304290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:16.346487 env[1209]: time="2024-12-13T01:57:16.346399813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/627eb9022eb44d766d831e3f68cf6ed1870f854f3710fbf1c86f6954381f8522 pid=1745 runtime=io.containerd.runc.v2
Dec 13 01:57:16.354981 systemd[1]: Started cri-containerd-9e0cc6ec911eecda8626c4f91a1f6f53cc9fc2adcbc42a4dc465f4c6761d499e.scope.
Dec 13 01:57:16.359612 systemd[1]: Started cri-containerd-aa5c186787e2e43c0d3dbf3bc6979d5c97077f807799ba9b1d7e28ca2a82b65d.scope.
Dec 13 01:57:16.369361 systemd[1]: Started cri-containerd-627eb9022eb44d766d831e3f68cf6ed1870f854f3710fbf1c86f6954381f8522.scope.
Dec 13 01:57:16.397219 env[1209]: time="2024-12-13T01:57:16.397113532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e0cc6ec911eecda8626c4f91a1f6f53cc9fc2adcbc42a4dc465f4c6761d499e\""
Dec 13 01:57:16.400321 kubelet[1678]: E1213 01:57:16.400293 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:16.402789 env[1209]: time="2024-12-13T01:57:16.402764085Z" level=info msg="CreateContainer within sandbox \"9e0cc6ec911eecda8626c4f91a1f6f53cc9fc2adcbc42a4dc465f4c6761d499e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:57:16.416093 env[1209]: time="2024-12-13T01:57:16.415688915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa5c186787e2e43c0d3dbf3bc6979d5c97077f807799ba9b1d7e28ca2a82b65d\""
Dec 13 01:57:16.416093 env[1209]: time="2024-12-13T01:57:16.415856315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:35b4ad3e5a2571cbcaaa32aea5b94b1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"627eb9022eb44d766d831e3f68cf6ed1870f854f3710fbf1c86f6954381f8522\""
Dec 13 01:57:16.417177 kubelet[1678]: E1213 01:57:16.417152 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:16.417222 kubelet[1678]: E1213 01:57:16.417188 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:16.419070 env[1209]: time="2024-12-13T01:57:16.419037595Z" level=info msg="CreateContainer within sandbox \"9e0cc6ec911eecda8626c4f91a1f6f53cc9fc2adcbc42a4dc465f4c6761d499e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63898c2c9635863bcf5f2c3cd50dda269694faae941c3b20253c229369b129c4\""
Dec 13 01:57:16.419721 env[1209]: time="2024-12-13T01:57:16.419696869Z" level=info msg="StartContainer for \"63898c2c9635863bcf5f2c3cd50dda269694faae941c3b20253c229369b129c4\""
Dec 13 01:57:16.419799 env[1209]: time="2024-12-13T01:57:16.419701427Z" level=info msg="CreateContainer within sandbox \"aa5c186787e2e43c0d3dbf3bc6979d5c97077f807799ba9b1d7e28ca2a82b65d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:57:16.420055 env[1209]: time="2024-12-13T01:57:16.420015870Z" level=info msg="CreateContainer within sandbox \"627eb9022eb44d766d831e3f68cf6ed1870f854f3710fbf1c86f6954381f8522\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:57:16.434615 systemd[1]: Started cri-containerd-63898c2c9635863bcf5f2c3cd50dda269694faae941c3b20253c229369b129c4.scope.
Dec 13 01:57:16.513724 env[1209]: time="2024-12-13T01:57:16.513652542Z" level=info msg="StartContainer for \"63898c2c9635863bcf5f2c3cd50dda269694faae941c3b20253c229369b129c4\" returns successfully"
Dec 13 01:57:16.688509 kubelet[1678]: W1213 01:57:16.688439 1678 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:16.688509 kubelet[1678]: E1213 01:57:16.688506 1678 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 13 01:57:16.715584 env[1209]: time="2024-12-13T01:57:16.715525936Z" level=info msg="CreateContainer within sandbox \"aa5c186787e2e43c0d3dbf3bc6979d5c97077f807799ba9b1d7e28ca2a82b65d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"262029874010d16bc2a80635d742cb84744dfba3d24b5a015a646a7699fd15dc\""
Dec 13 01:57:16.716025 env[1209]: time="2024-12-13T01:57:16.716006608Z" level=info msg="StartContainer for \"262029874010d16bc2a80635d742cb84744dfba3d24b5a015a646a7699fd15dc\""
Dec 13 01:57:16.730326 systemd[1]: Started cri-containerd-262029874010d16bc2a80635d742cb84744dfba3d24b5a015a646a7699fd15dc.scope.
Dec 13 01:57:16.854261 env[1209]: time="2024-12-13T01:57:16.854182140Z" level=info msg="CreateContainer within sandbox \"627eb9022eb44d766d831e3f68cf6ed1870f854f3710fbf1c86f6954381f8522\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"70bb20595910ea7f08932919239745428a84bf18ca3b8af9721b04774ef3d65b\""
Dec 13 01:57:16.854897 env[1209]: time="2024-12-13T01:57:16.854862474Z" level=info msg="StartContainer for \"70bb20595910ea7f08932919239745428a84bf18ca3b8af9721b04774ef3d65b\""
Dec 13 01:57:16.868778 systemd[1]: Started cri-containerd-70bb20595910ea7f08932919239745428a84bf18ca3b8af9721b04774ef3d65b.scope.
Dec 13 01:57:16.892665 env[1209]: time="2024-12-13T01:57:16.892572112Z" level=info msg="StartContainer for \"262029874010d16bc2a80635d742cb84744dfba3d24b5a015a646a7699fd15dc\" returns successfully"
Dec 13 01:57:17.066765 env[1209]: time="2024-12-13T01:57:17.066674344Z" level=info msg="StartContainer for \"70bb20595910ea7f08932919239745428a84bf18ca3b8af9721b04774ef3d65b\" returns successfully"
Dec 13 01:57:17.354092 kubelet[1678]: E1213 01:57:17.353993 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:17.355330 kubelet[1678]: E1213 01:57:17.355305 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:17.357123 kubelet[1678]: E1213 01:57:17.357100 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:18.359174 kubelet[1678]: E1213 01:57:18.359147 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:18.537266 kubelet[1678]: E1213 01:57:18.537228 1678 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:57:18.569698 kubelet[1678]: E1213 01:57:18.569658 1678 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Dec 13 01:57:18.642724 kubelet[1678]: I1213 01:57:18.642616 1678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:18.648333 kubelet[1678]: I1213 01:57:18.648313 1678 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:57:18.653959 kubelet[1678]: E1213 01:57:18.653914 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:18.754058 kubelet[1678]: E1213 01:57:18.754026 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:18.855085 kubelet[1678]: E1213 01:57:18.855050 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:18.955166 kubelet[1678]: E1213 01:57:18.955123 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.055730 kubelet[1678]: E1213 01:57:19.055696 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.156321 kubelet[1678]: E1213 01:57:19.156269 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.256912 kubelet[1678]: E1213 01:57:19.256811 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.357320 kubelet[1678]: E1213 01:57:19.357266 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.457881 kubelet[1678]: E1213 01:57:19.457826 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.558530 kubelet[1678]: E1213 01:57:19.558390 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.658910 kubelet[1678]: E1213 01:57:19.658854 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.759536 kubelet[1678]: E1213 01:57:19.759482 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.860289 kubelet[1678]: E1213 01:57:19.860166 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:19.960896 kubelet[1678]: E1213 01:57:19.960842 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:20.061430 kubelet[1678]: E1213 01:57:20.061377 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:20.085073 kubelet[1678]: E1213 01:57:20.085041 1678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:20.162194 kubelet[1678]: E1213 01:57:20.162105 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:20.237801 systemd[1]: Reloading.
Dec 13 01:57:20.262665 kubelet[1678]: E1213 01:57:20.262627 1678 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:20.308029 /usr/lib/systemd/system-generators/torcx-generator[1968]: time="2024-12-13T01:57:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:57:20.308061 /usr/lib/systemd/system-generators/torcx-generator[1968]: time="2024-12-13T01:57:20Z" level=info msg="torcx already run"
Dec 13 01:57:20.317692 kubelet[1678]: I1213 01:57:20.317653 1678 apiserver.go:52] "Watching apiserver"
Dec 13 01:57:20.331020 kubelet[1678]: I1213 01:57:20.330961 1678 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 01:57:20.567099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:57:20.567114 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:57:20.583760 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:20.677571 systemd[1]: Stopping kubelet.service...
Dec 13 01:57:20.697237 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:57:20.697415 systemd[1]: Stopped kubelet.service.
Dec 13 01:57:20.699155 systemd[1]: Starting kubelet.service...
Dec 13 01:57:20.777315 systemd[1]: Started kubelet.service.
Dec 13 01:57:20.812960 kubelet[2013]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:20.812960 kubelet[2013]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:57:20.812960 kubelet[2013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:20.813375 kubelet[2013]: I1213 01:57:20.813009 2013 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:57:20.817756 kubelet[2013]: I1213 01:57:20.817672 2013 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 01:57:20.817756 kubelet[2013]: I1213 01:57:20.817692 2013 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:57:20.817898 kubelet[2013]: I1213 01:57:20.817892 2013 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 01:57:20.819126 kubelet[2013]: I1213 01:57:20.819098 2013 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:57:20.820231 kubelet[2013]: I1213 01:57:20.820203 2013 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:57:20.825950 kubelet[2013]: I1213 01:57:20.825923 2013 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:57:20.826142 kubelet[2013]: I1213 01:57:20.826118 2013 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:57:20.826282 kubelet[2013]: I1213 01:57:20.826142 2013 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:57:20.826380 kubelet[2013]: I1213 01:57:20.826291 2013 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:57:20.826380 kubelet[2013]: I1213 01:57:20.826301 2013 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:57:20.826380 kubelet[2013]: I1213 01:57:20.826335 2013 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:20.826461 kubelet[2013]: I1213 01:57:20.826401 2013 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 01:57:20.826461 kubelet[2013]: I1213 01:57:20.826412 2013 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:57:20.826461 kubelet[2013]: I1213 01:57:20.826428 2013 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:57:20.826461 kubelet[2013]: I1213 01:57:20.826440 2013 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:57:20.826902 kubelet[2013]: I1213 01:57:20.826882 2013 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 01:57:20.827217 kubelet[2013]: I1213 01:57:20.827001 2013 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:57:20.827315 kubelet[2013]: I1213 01:57:20.827299 2013 server.go:1264] "Started kubelet"
Dec 13 01:57:20.827722 kubelet[2013]: I1213 01:57:20.827677 2013 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:57:20.827929 kubelet[2013]: I1213 01:57:20.827914 2013 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:57:20.828119 kubelet[2013]: I1213 01:57:20.828094 2013 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:57:20.828884 kubelet[2013]: I1213 01:57:20.828866 2013 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 01:57:20.829162 kubelet[2013]: I1213 01:57:20.829149 2013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:57:20.830431 kubelet[2013]: E1213 01:57:20.830401 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:20.830488 kubelet[2013]: I1213 01:57:20.830446 2013 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:57:20.830518 kubelet[2013]: I1213 01:57:20.830513 2013 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 01:57:20.830608 kubelet[2013]: I1213 01:57:20.830597 2013 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:57:20.835165 kubelet[2013]: I1213 01:57:20.835149 2013 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:57:20.835296 kubelet[2013]: I1213 01:57:20.835274 2013 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:57:20.837712 kubelet[2013]: I1213 01:57:20.837684 2013 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:57:20.850445 kubelet[2013]: I1213 01:57:20.850410 2013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:57:20.852657 kubelet[2013]: I1213 01:57:20.852616 2013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:57:20.852767 kubelet[2013]: I1213 01:57:20.852751 2013 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:57:20.852858 kubelet[2013]: I1213 01:57:20.852842 2013 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 01:57:20.853053 kubelet[2013]: E1213 01:57:20.852959 2013 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:57:20.869976 kubelet[2013]: I1213 01:57:20.869948 2013 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:57:20.869976 kubelet[2013]: I1213 01:57:20.869964 2013 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:57:20.869976 kubelet[2013]: I1213 01:57:20.869979 2013 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:20.870139 kubelet[2013]: I1213 01:57:20.870126 2013 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:57:20.870171 kubelet[2013]: I1213 01:57:20.870135 2013 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:57:20.870171 kubelet[2013]: I1213 01:57:20.870153 2013 policy_none.go:49] "None policy: Start"
Dec 13 01:57:20.870631 kubelet[2013]: I1213 01:57:20.870614 2013 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:57:20.870728 kubelet[2013]: I1213 01:57:20.870714 2013 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:57:20.870947 kubelet[2013]: I1213 01:57:20.870933 2013 state_mem.go:75] "Updated machine memory state"
Dec 13 01:57:20.877934 kubelet[2013]: I1213 01:57:20.877913 2013 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:57:20.878163 kubelet[2013]: I1213 01:57:20.878080 2013 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:57:20.878259 kubelet[2013]: I1213 01:57:20.878169 2013 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:57:20.934921 kubelet[2013]: I1213 01:57:20.934874 2013 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:20.954069 kubelet[2013]: I1213 01:57:20.954019 2013 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:57:20.954193 kubelet[2013]: I1213 01:57:20.954107 2013 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:57:20.954193 kubelet[2013]: I1213 01:57:20.954158 2013 topology_manager.go:215] "Topology Admit Handler" podUID="35b4ad3e5a2571cbcaaa32aea5b94b1b" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:57:20.967333 kubelet[2013]: I1213 01:57:20.967245 2013 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 01:57:20.967333 kubelet[2013]: I1213 01:57:20.967322 2013 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:57:21.133214 kubelet[2013]: I1213 01:57:21.133096 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35b4ad3e5a2571cbcaaa32aea5b94b1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"35b4ad3e5a2571cbcaaa32aea5b94b1b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:21.133214 kubelet[2013]: I1213 01:57:21.133126 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35b4ad3e5a2571cbcaaa32aea5b94b1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"35b4ad3e5a2571cbcaaa32aea5b94b1b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:21.133214 kubelet[2013]: I1213 01:57:21.133143 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:21.133214 kubelet[2013]: I1213 01:57:21.133156 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:21.133214 kubelet[2013]: I1213 01:57:21.133170 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:21.133435 kubelet[2013]: I1213 01:57:21.133186 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:21.133435 kubelet[2013]: I1213 01:57:21.133215 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:57:21.133435 kubelet[2013]: I1213 01:57:21.133231 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35b4ad3e5a2571cbcaaa32aea5b94b1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"35b4ad3e5a2571cbcaaa32aea5b94b1b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:21.133435 kubelet[2013]: I1213 01:57:21.133245 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:21.237901 sudo[2047]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 01:57:21.238083 sudo[2047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 01:57:21.266419 kubelet[2013]: E1213 01:57:21.266383 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:21.267173 kubelet[2013]: E1213 01:57:21.267145 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:21.267334 kubelet[2013]: E1213 01:57:21.267311 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:21.731586 sudo[2047]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:21.826670 kubelet[2013]: I1213 01:57:21.826618 2013 apiserver.go:52] "Watching apiserver"
Dec 13 01:57:21.830726 kubelet[2013]: I1213 01:57:21.830707 2013 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 01:57:21.860662 kubelet[2013]: E1213 01:57:21.860620 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:21.861434 kubelet[2013]: E1213 01:57:21.861410 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:21.862653 kubelet[2013]: E1213 01:57:21.862182 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:22.239851 kubelet[2013]: I1213 01:57:22.239763 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.239738092 podStartE2EDuration="2.239738092s" podCreationTimestamp="2024-12-13 01:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:21.93081822 +0000 UTC m=+1.147351963" watchObservedRunningTime="2024-12-13 01:57:22.239738092 +0000 UTC m=+1.456271835"
Dec 13 01:57:22.240071 kubelet[2013]: I1213 01:57:22.239909 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.239904438 podStartE2EDuration="2.239904438s" podCreationTimestamp="2024-12-13 01:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:22.238164786 +0000 UTC m=+1.454698529" watchObservedRunningTime="2024-12-13 01:57:22.239904438 +0000 UTC m=+1.456438181"
Dec 13 01:57:22.246208 kubelet[2013]: I1213 01:57:22.246156 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.24614042 podStartE2EDuration="2.24614042s" podCreationTimestamp="2024-12-13 01:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:22.245924049 +0000 UTC m=+1.462457792" watchObservedRunningTime="2024-12-13 01:57:22.24614042 +0000 UTC m=+1.462674164"
Dec 13 01:57:22.861337 kubelet[2013]: E1213 01:57:22.861305 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:22.861748 kubelet[2013]: E1213 01:57:22.861729 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:22.954318 sudo[1310]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:22.955508 sshd[1307]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:22.957524 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:53700.service: Deactivated successfully.
Dec 13 01:57:22.958240 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:57:22.958365 systemd[1]: session-5.scope: Consumed 3.934s CPU time.
Dec 13 01:57:22.958822 systemd-logind[1198]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:57:22.959462 systemd-logind[1198]: Removed session 5.
Dec 13 01:57:24.136683 update_engine[1200]: I1213 01:57:24.136648 1200 update_attempter.cc:509] Updating boot flags...
Dec 13 01:57:25.271596 kubelet[2013]: E1213 01:57:25.271554 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:27.961686 kubelet[2013]: E1213 01:57:27.961633 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:28.869159 kubelet[2013]: E1213 01:57:28.869118 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:32.601665 kubelet[2013]: E1213 01:57:32.601618 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:35.278068 kubelet[2013]: E1213 01:57:35.278003 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:36.370951 kubelet[2013]: I1213 01:57:36.370921 2013 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:57:36.371345 env[1209]: time="2024-12-13T01:57:36.371261381Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:57:36.371612 kubelet[2013]: I1213 01:57:36.371404 2013 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:57:37.244172 kubelet[2013]: I1213 01:57:37.244132 2013 topology_manager.go:215] "Topology Admit Handler" podUID="3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b" podNamespace="kube-system" podName="kube-proxy-ks928"
Dec 13 01:57:37.245935 kubelet[2013]: I1213 01:57:37.245901 2013 topology_manager.go:215] "Topology Admit Handler" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" podNamespace="kube-system" podName="cilium-rz47j"
Dec 13 01:57:37.250065 systemd[1]: Created slice kubepods-besteffort-pod3836eb3e_7a3c_40f5_9b34_e8b8f8f6413b.slice.
Dec 13 01:57:37.253259 kubelet[2013]: W1213 01:57:37.253222 2013 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:57:37.253259 kubelet[2013]: E1213 01:57:37.253257 2013 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:57:37.253453 kubelet[2013]: W1213 01:57:37.253287 2013 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:57:37.253453 kubelet[2013]: E1213 01:57:37.253295 2013 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:57:37.253453 kubelet[2013]: W1213 01:57:37.253315 2013 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:57:37.253453 kubelet[2013]: E1213 01:57:37.253322 2013 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:57:37.259503 systemd[1]: Created slice kubepods-burstable-pod81aaebab_148a_4727_b718_cc60d72f5b60.slice.
Dec 13 01:57:37.343607 kubelet[2013]: I1213 01:57:37.343561 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b-lib-modules\") pod \"kube-proxy-ks928\" (UID: \"3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b\") " pod="kube-system/kube-proxy-ks928" Dec 13 01:57:37.343607 kubelet[2013]: I1213 01:57:37.343593 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-hostproc\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.343607 kubelet[2013]: I1213 01:57:37.343608 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-lib-modules\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.343885 kubelet[2013]: I1213 01:57:37.343620 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-xtables-lock\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.343885 kubelet[2013]: I1213 01:57:37.343698 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b-xtables-lock\") pod \"kube-proxy-ks928\" (UID: \"3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b\") " pod="kube-system/kube-proxy-ks928" Dec 13 01:57:37.343885 kubelet[2013]: I1213 01:57:37.343713 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-cgroup\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.343885 kubelet[2013]: I1213 01:57:37.343727 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-etc-cni-netd\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.343885 kubelet[2013]: I1213 01:57:37.343756 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-hubble-tls\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.343885 kubelet[2013]: I1213 01:57:37.343788 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59lv\" (UniqueName: \"kubernetes.io/projected/3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b-kube-api-access-t59lv\") pod \"kube-proxy-ks928\" (UID: \"3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b\") " pod="kube-system/kube-proxy-ks928" Dec 13 01:57:37.344077 kubelet[2013]: I1213 01:57:37.343803 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/81aaebab-148a-4727-b718-cc60d72f5b60-clustermesh-secrets\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344077 kubelet[2013]: I1213 01:57:37.343818 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-run\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344077 kubelet[2013]: I1213 01:57:37.343830 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-bpf-maps\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344077 kubelet[2013]: I1213 01:57:37.343842 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-net\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344077 kubelet[2013]: I1213 01:57:37.343866 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxcxb\" (UniqueName: \"kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-kube-api-access-mxcxb\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344077 kubelet[2013]: I1213 01:57:37.343879 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b-kube-proxy\") pod \"kube-proxy-ks928\" (UID: \"3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b\") " pod="kube-system/kube-proxy-ks928" Dec 13 01:57:37.344327 kubelet[2013]: I1213 01:57:37.343893 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cni-path\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344327 kubelet[2013]: I1213 01:57:37.343904 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-config-path\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.344327 kubelet[2013]: I1213 01:57:37.343919 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-kernel\") pod \"cilium-rz47j\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") " pod="kube-system/cilium-rz47j" Dec 13 01:57:37.469361 kubelet[2013]: I1213 01:57:37.469317 2013 topology_manager.go:215] "Topology Admit Handler" podUID="9ccf0f7a-b588-4567-bca8-eace7a988482" podNamespace="kube-system" podName="cilium-operator-599987898-6l9jn" Dec 13 01:57:37.475242 systemd[1]: Created slice kubepods-besteffort-pod9ccf0f7a_b588_4567_bca8_eace7a988482.slice. 
Dec 13 01:57:37.545021 kubelet[2013]: I1213 01:57:37.544901 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ccf0f7a-b588-4567-bca8-eace7a988482-cilium-config-path\") pod \"cilium-operator-599987898-6l9jn\" (UID: \"9ccf0f7a-b588-4567-bca8-eace7a988482\") " pod="kube-system/cilium-operator-599987898-6l9jn" Dec 13 01:57:37.545021 kubelet[2013]: I1213 01:57:37.544954 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfhl\" (UniqueName: \"kubernetes.io/projected/9ccf0f7a-b588-4567-bca8-eace7a988482-kube-api-access-nqfhl\") pod \"cilium-operator-599987898-6l9jn\" (UID: \"9ccf0f7a-b588-4567-bca8-eace7a988482\") " pod="kube-system/cilium-operator-599987898-6l9jn" Dec 13 01:57:37.557448 kubelet[2013]: E1213 01:57:37.557420 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:37.558001 env[1209]: time="2024-12-13T01:57:37.557959367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ks928,Uid:3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:37.575732 env[1209]: time="2024-12-13T01:57:37.575670117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:37.575732 env[1209]: time="2024-12-13T01:57:37.575707998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:37.575919 env[1209]: time="2024-12-13T01:57:37.575725050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:37.575919 env[1209]: time="2024-12-13T01:57:37.575841620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a0ce3210a2d79b7cc9e7fab1370a52d4257a75c728192ce8071c563f253093e pid=2118 runtime=io.containerd.runc.v2 Dec 13 01:57:37.589889 systemd[1]: Started cri-containerd-9a0ce3210a2d79b7cc9e7fab1370a52d4257a75c728192ce8071c563f253093e.scope. 
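This is the standard CRI startup sequence: the kubelet issues RunPodSandbox (the containerd shim for sandbox 9a0ce321... starts above), the entries that follow return the sandbox id, and CreateContainer plus StartContainer then run inside it. A hedged sketch of the same three calls against containerd's CRI socket; the socket path and the minimal configs are assumptions mirroring the log, and a real request would also carry image, command, and mounts:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI endpoint; path assumed from the usual layout.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1) RunPodSandbox -> sandbox id (the long hex id in the journal).
        sbCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name: "kube-proxy-ks928", Namespace: "kube-system",
                Uid: "3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2) CreateContainer within the sandbox, 3) StartContainer.
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
            },
            SandboxConfig: sbCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }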
Dec 13 01:57:37.608714 env[1209]: time="2024-12-13T01:57:37.608670168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ks928,Uid:3836eb3e-7a3c-40f5-9b34-e8b8f8f6413b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a0ce3210a2d79b7cc9e7fab1370a52d4257a75c728192ce8071c563f253093e\"" Dec 13 01:57:37.610461 kubelet[2013]: E1213 01:57:37.609438 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:37.612297 env[1209]: time="2024-12-13T01:57:37.612259699Z" level=info msg="CreateContainer within sandbox \"9a0ce3210a2d79b7cc9e7fab1370a52d4257a75c728192ce8071c563f253093e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:57:37.626436 env[1209]: time="2024-12-13T01:57:37.626394113Z" level=info msg="CreateContainer within sandbox \"9a0ce3210a2d79b7cc9e7fab1370a52d4257a75c728192ce8071c563f253093e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"578f2a4507d3f31b00d36a130cf90feed2df693fe98f9a3893915cf044e9c7f5\"" Dec 13 01:57:37.626824 env[1209]: time="2024-12-13T01:57:37.626796993Z" level=info msg="StartContainer for \"578f2a4507d3f31b00d36a130cf90feed2df693fe98f9a3893915cf044e9c7f5\"" Dec 13 01:57:37.640425 systemd[1]: Started cri-containerd-578f2a4507d3f31b00d36a130cf90feed2df693fe98f9a3893915cf044e9c7f5.scope. Dec 13 01:57:37.666903 env[1209]: time="2024-12-13T01:57:37.666834961Z" level=info msg="StartContainer for \"578f2a4507d3f31b00d36a130cf90feed2df693fe98f9a3893915cf044e9c7f5\" returns successfully" Dec 13 01:57:37.883748 kubelet[2013]: E1213 01:57:37.883652 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:37.892881 kubelet[2013]: I1213 01:57:37.892521 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ks928" podStartSLOduration=1.892500891 podStartE2EDuration="1.892500891s" podCreationTimestamp="2024-12-13 01:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:37.892093863 +0000 UTC m=+17.108627606" watchObservedRunningTime="2024-12-13 01:57:37.892500891 +0000 UTC m=+17.109034634" Dec 13 01:57:38.378533 kubelet[2013]: E1213 01:57:38.378490 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:38.378968 env[1209]: time="2024-12-13T01:57:38.378901581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6l9jn,Uid:9ccf0f7a-b588-4567-bca8-eace7a988482,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:38.445881 kubelet[2013]: E1213 01:57:38.445847 2013 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 01:57:38.445881 kubelet[2013]: E1213 01:57:38.445870 2013 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-rz47j: failed to sync secret cache: timed out waiting for the condition Dec 13 01:57:38.445993 kubelet[2013]: E1213 01:57:38.445926 2013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-hubble-tls 
podName:81aaebab-148a-4727-b718-cc60d72f5b60 nodeName:}" failed. No retries permitted until 2024-12-13 01:57:38.94590824 +0000 UTC m=+18.162441983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-hubble-tls") pod "cilium-rz47j" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60") : failed to sync secret cache: timed out waiting for the condition Dec 13 01:57:38.456567 systemd[1]: run-containerd-runc-k8s.io-9a0ce3210a2d79b7cc9e7fab1370a52d4257a75c728192ce8071c563f253093e-runc.Un3E8m.mount: Deactivated successfully. Dec 13 01:57:38.521302 env[1209]: time="2024-12-13T01:57:38.520619947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:38.521302 env[1209]: time="2024-12-13T01:57:38.520670331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:38.521302 env[1209]: time="2024-12-13T01:57:38.520680841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:38.521782 env[1209]: time="2024-12-13T01:57:38.521170945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5 pid=2317 runtime=io.containerd.runc.v2 Dec 13 01:57:38.536535 systemd[1]: Started cri-containerd-0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5.scope. Dec 13 01:57:38.570615 env[1209]: time="2024-12-13T01:57:38.570578746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6l9jn,Uid:9ccf0f7a-b588-4567-bca8-eace7a988482,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5\"" Dec 13 01:57:38.571655 kubelet[2013]: E1213 01:57:38.571428 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:38.573128 env[1209]: time="2024-12-13T01:57:38.573089471Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:57:39.062013 kubelet[2013]: E1213 01:57:39.061973 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:39.062342 env[1209]: time="2024-12-13T01:57:39.062304159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rz47j,Uid:81aaebab-148a-4727-b718-cc60d72f5b60,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:39.085955 env[1209]: time="2024-12-13T01:57:39.085887626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:39.085955 env[1209]: time="2024-12-13T01:57:39.085934003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:39.085955 env[1209]: time="2024-12-13T01:57:39.085948591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
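The MountVolume.SetUp failure above follows from the forbidden watches earlier: the hubble-server-certs secret has not reached the kubelet's cache yet, so nestedpendingoperations schedules a retry, with "durationBeforeRetry 500ms" as the first step of an exponential backoff. A sketch of that shape; the 500ms start is taken from the log, while the doubling and the 2m2s cap are assumptions mirroring the kubelet's volume-operation backoff rather than something shown here:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the delay before retry number attempt (0-based):
    // 500ms initially, doubling, capped (cap value is an assumption).
    func backoff(attempt int) time.Duration {
        const cap = 2*time.Minute + 2*time.Second
        d := 500 * time.Millisecond
        for i := 0; i < attempt; i++ {
            d *= 2
            if d > cap {
                return cap
            }
        }
        return d
    }

    func main() {
        for i := 0; i < 6; i++ {
            fmt.Printf("retry %d after %v\n", i+1, backoff(i))
        }
    }

Here the first 500ms retry already succeeds: the secret cache syncs, and the cilium-rz47j sandbox runs shortly after.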
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:39.086187 env[1209]: time="2024-12-13T01:57:39.086102822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a pid=2358 runtime=io.containerd.runc.v2 Dec 13 01:57:39.094943 systemd[1]: Started cri-containerd-4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a.scope. Dec 13 01:57:39.118234 env[1209]: time="2024-12-13T01:57:39.118180952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rz47j,Uid:81aaebab-148a-4727-b718-cc60d72f5b60,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\"" Dec 13 01:57:39.118829 kubelet[2013]: E1213 01:57:39.118802 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:39.456571 systemd[1]: run-containerd-runc-k8s.io-0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5-runc.D69fCu.mount: Deactivated successfully. Dec 13 01:57:41.823318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306123036.mount: Deactivated successfully. Dec 13 01:57:42.726759 env[1209]: time="2024-12-13T01:57:42.726702312Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:42.728459 env[1209]: time="2024-12-13T01:57:42.728428534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:42.730032 env[1209]: time="2024-12-13T01:57:42.730007527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:42.730362 env[1209]: time="2024-12-13T01:57:42.730318223Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:57:42.731436 env[1209]: time="2024-12-13T01:57:42.731398978Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:57:42.732447 env[1209]: time="2024-12-13T01:57:42.732421234Z" level=info msg="CreateContainer within sandbox \"0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:57:42.746440 env[1209]: time="2024-12-13T01:57:42.746386256Z" level=info msg="CreateContainer within sandbox \"0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\"" Dec 13 01:57:42.746997 env[1209]: time="2024-12-13T01:57:42.746964496Z" level=info msg="StartContainer for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\"" Dec 13 01:57:42.760335 systemd[1]: Started 
cri-containerd-de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2.scope. Dec 13 01:57:42.782441 env[1209]: time="2024-12-13T01:57:42.782387483Z" level=info msg="StartContainer for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" returns successfully" Dec 13 01:57:42.893955 kubelet[2013]: E1213 01:57:42.893917 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:43.895209 kubelet[2013]: E1213 01:57:43.895169 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:46.506094 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:48734.service. Dec 13 01:57:46.574820 sshd[2432]: Accepted publickey for core from 10.0.0.1 port 48734 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:46.575969 sshd[2432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:46.580914 systemd[1]: Started session-6.scope. Dec 13 01:57:46.581401 systemd-logind[1198]: New session 6 of user core. Dec 13 01:57:46.698754 sshd[2432]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:46.701373 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:48734.service: Deactivated successfully. Dec 13 01:57:46.702120 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:57:46.702799 systemd-logind[1198]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:57:46.703482 systemd-logind[1198]: Removed session 6. Dec 13 01:57:48.264904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378162969.mount: Deactivated successfully. Dec 13 01:57:51.702423 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:48740.service. 
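The operator image pulled in the entries above (and the agent image later) is referenced as tag@sha256:..., so containerd resolves it content-addressed; the ImageCreate/ImageUpdate pair then records it under both the digest name and the returned image ID. A small pure-Go check that a reference is digest-pinned, using the reference from the journal:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitDigestRef splits a "repo:tag@sha256:..." reference like the ones
    // containerd logs above into its name and pinned digest.
    func splitDigestRef(ref string) (name, digest string, pinned bool) {
        i := strings.LastIndex(ref, "@")
        if i < 0 || !strings.HasPrefix(ref[i+1:], "sha256:") {
            return ref, "", false
        }
        return ref[:i], ref[i+1:], true
    }

    func main() {
        ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        name, digest, pinned := splitDigestRef(ref)
        fmt.Println(name, pinned)
        fmt.Println(digest)
    }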
Dec 13 01:57:51.915929 env[1209]: time="2024-12-13T01:57:51.915866451Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:51.917955 env[1209]: time="2024-12-13T01:57:51.917925662Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:51.919579 env[1209]: time="2024-12-13T01:57:51.919542071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:51.920182 env[1209]: time="2024-12-13T01:57:51.920155234Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:57:51.922034 env[1209]: time="2024-12-13T01:57:51.922002657Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:57:51.926250 sshd[2447]: Accepted publickey for core from 10.0.0.1 port 48740 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:51.927797 sshd[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:51.934265 env[1209]: time="2024-12-13T01:57:51.934215989Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\"" Dec 13 01:57:51.935029 systemd-logind[1198]: New session 7 of user core. Dec 13 01:57:51.936008 env[1209]: time="2024-12-13T01:57:51.935966279Z" level=info msg="StartContainer for \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\"" Dec 13 01:57:51.936122 systemd[1]: Started session-7.scope. Dec 13 01:57:51.952291 systemd[1]: Started cri-containerd-11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040.scope. Dec 13 01:57:51.976844 env[1209]: time="2024-12-13T01:57:51.976725098Z" level=info msg="StartContainer for \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\" returns successfully" Dec 13 01:57:51.985945 systemd[1]: cri-containerd-11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040.scope: Deactivated successfully. Dec 13 01:57:52.197920 sshd[2447]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:52.200071 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:48740.service: Deactivated successfully. Dec 13 01:57:52.200793 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:57:52.201305 systemd-logind[1198]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:57:52.202156 systemd-logind[1198]: Removed session 7. 
Dec 13 01:57:52.885092 env[1209]: time="2024-12-13T01:57:52.885028363Z" level=info msg="shim disconnected" id=11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040 Dec 13 01:57:52.885092 env[1209]: time="2024-12-13T01:57:52.885077165Z" level=warning msg="cleaning up after shim disconnected" id=11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040 namespace=k8s.io Dec 13 01:57:52.885092 env[1209]: time="2024-12-13T01:57:52.885087024Z" level=info msg="cleaning up dead shim" Dec 13 01:57:52.891245 env[1209]: time="2024-12-13T01:57:52.891184846Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2510 runtime=io.containerd.runc.v2\n" Dec 13 01:57:52.908728 kubelet[2013]: E1213 01:57:52.908705 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:52.910932 env[1209]: time="2024-12-13T01:57:52.910861284Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:57:52.923838 kubelet[2013]: I1213 01:57:52.923780 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6l9jn" podStartSLOduration=11.765086666 podStartE2EDuration="15.923761453s" podCreationTimestamp="2024-12-13 01:57:37 +0000 UTC" firstStartedPulling="2024-12-13 01:57:38.572599066 +0000 UTC m=+17.789132799" lastFinishedPulling="2024-12-13 01:57:42.731273843 +0000 UTC m=+21.947807586" observedRunningTime="2024-12-13 01:57:42.91223138 +0000 UTC m=+22.128765123" watchObservedRunningTime="2024-12-13 01:57:52.923761453 +0000 UTC m=+32.140295226" Dec 13 01:57:52.924993 env[1209]: time="2024-12-13T01:57:52.924944226Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\"" Dec 13 01:57:52.925370 env[1209]: time="2024-12-13T01:57:52.925348366Z" level=info msg="StartContainer for \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\"" Dec 13 01:57:52.930889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040-rootfs.mount: Deactivated successfully. Dec 13 01:57:52.939666 systemd[1]: run-containerd-runc-k8s.io-d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586-runc.qhs0F0.mount: Deactivated successfully. Dec 13 01:57:52.943708 systemd[1]: Started cri-containerd-d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586.scope. Dec 13 01:57:52.964060 env[1209]: time="2024-12-13T01:57:52.963999351Z" level=info msg="StartContainer for \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\" returns successfully" Dec 13 01:57:52.972343 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:57:52.972550 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:57:52.972736 systemd[1]: Stopping systemd-sysctl.service... Dec 13 01:57:52.973969 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:52.976086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
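The pod_startup_latency_tracker entry above makes the metric's derivation visible: podStartSLOduration is the end-to-end startup time minus the image-pull window, here 15.923761453s − (01:57:42.731273843 − 01:57:38.572599066) ≈ 11.765086676s, which matches the logged 11.765086666s up to rounding. The same arithmetic in Go, with the timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the cilium-operator startup entry above.
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        firstStartedPulling := parse("2024-12-13T01:57:38.572599066Z")
        lastFinishedPulling := parse("2024-12-13T01:57:42.731273843Z")
        e2e := 15923761453 * time.Nanosecond // podStartE2EDuration=15.923761453s

        // SLO duration = end-to-end startup minus time spent pulling images.
        slo := e2e - lastFinishedPulling.Sub(firstStartedPulling)
        fmt.Println(slo) // ≈ 11.765086676s, matching podStartSLOduration
    }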
Dec 13 01:57:52.976759 systemd[1]: cri-containerd-d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586.scope: Deactivated successfully. Dec 13 01:57:52.983825 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:57:52.996607 env[1209]: time="2024-12-13T01:57:52.996558505Z" level=info msg="shim disconnected" id=d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586 Dec 13 01:57:52.996773 env[1209]: time="2024-12-13T01:57:52.996609201Z" level=warning msg="cleaning up after shim disconnected" id=d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586 namespace=k8s.io Dec 13 01:57:52.996773 env[1209]: time="2024-12-13T01:57:52.996624449Z" level=info msg="cleaning up dead shim" Dec 13 01:57:53.002891 env[1209]: time="2024-12-13T01:57:53.002849711Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n" Dec 13 01:57:53.911794 kubelet[2013]: E1213 01:57:53.911765 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:53.914430 env[1209]: time="2024-12-13T01:57:53.914382109Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:57:53.930899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586-rootfs.mount: Deactivated successfully. Dec 13 01:57:53.934836 env[1209]: time="2024-12-13T01:57:53.934790417Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\"" Dec 13 01:57:53.935667 env[1209]: time="2024-12-13T01:57:53.935307038Z" level=info msg="StartContainer for \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\"" Dec 13 01:57:53.952389 systemd[1]: run-containerd-runc-k8s.io-edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd-runc.w3XIvD.mount: Deactivated successfully. Dec 13 01:57:53.953628 systemd[1]: Started cri-containerd-edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd.scope. Dec 13 01:57:53.976881 env[1209]: time="2024-12-13T01:57:53.976449396Z" level=info msg="StartContainer for \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\" returns successfully" Dec 13 01:57:53.977814 systemd[1]: cri-containerd-edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd.scope: Deactivated successfully. Dec 13 01:57:53.994167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:53.999198 env[1209]: time="2024-12-13T01:57:53.999134182Z" level=info msg="shim disconnected" id=edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd Dec 13 01:57:53.999198 env[1209]: time="2024-12-13T01:57:53.999191901Z" level=warning msg="cleaning up after shim disconnected" id=edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd namespace=k8s.io Dec 13 01:57:53.999198 env[1209]: time="2024-12-13T01:57:53.999203192Z" level=info msg="cleaning up dead shim" Dec 13 01:57:54.005482 env[1209]: time="2024-12-13T01:57:54.005440945Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2628 runtime=io.containerd.runc.v2\n" Dec 13 01:57:54.915058 kubelet[2013]: E1213 01:57:54.915026 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:54.919028 env[1209]: time="2024-12-13T01:57:54.918984555Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:57:54.934764 env[1209]: time="2024-12-13T01:57:54.934720507Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\"" Dec 13 01:57:54.935254 env[1209]: time="2024-12-13T01:57:54.935228873Z" level=info msg="StartContainer for \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\"" Dec 13 01:57:54.949023 systemd[1]: Started cri-containerd-cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235.scope. Dec 13 01:57:54.968989 systemd[1]: cri-containerd-cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235.scope: Deactivated successfully. Dec 13 01:57:54.969807 env[1209]: time="2024-12-13T01:57:54.969764495Z" level=info msg="StartContainer for \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\" returns successfully" Dec 13 01:57:54.983995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235-rootfs.mount: Deactivated successfully. 
Dec 13 01:57:54.988685 env[1209]: time="2024-12-13T01:57:54.988643043Z" level=info msg="shim disconnected" id=cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235 Dec 13 01:57:54.988685 env[1209]: time="2024-12-13T01:57:54.988681826Z" level=warning msg="cleaning up after shim disconnected" id=cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235 namespace=k8s.io Dec 13 01:57:54.988779 env[1209]: time="2024-12-13T01:57:54.988689741Z" level=info msg="cleaning up dead shim" Dec 13 01:57:54.994553 env[1209]: time="2024-12-13T01:57:54.994514647Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2684 runtime=io.containerd.runc.v2\n" Dec 13 01:57:55.919899 kubelet[2013]: E1213 01:57:55.919868 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.922837 env[1209]: time="2024-12-13T01:57:55.922043755Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:57:56.106761 env[1209]: time="2024-12-13T01:57:56.106671099Z" level=info msg="CreateContainer within sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\"" Dec 13 01:57:56.107357 env[1209]: time="2024-12-13T01:57:56.107326060Z" level=info msg="StartContainer for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\"" Dec 13 01:57:56.124051 systemd[1]: Started cri-containerd-47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e.scope. Dec 13 01:57:56.144730 env[1209]: time="2024-12-13T01:57:56.144656549Z" level=info msg="StartContainer for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" returns successfully" Dec 13 01:57:56.298773 kubelet[2013]: I1213 01:57:56.298687 2013 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:57:56.321189 kubelet[2013]: I1213 01:57:56.321135 2013 topology_manager.go:215] "Topology Admit Handler" podUID="cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c5pnh" Dec 13 01:57:56.326837 systemd[1]: Created slice kubepods-burstable-podcf8f4c6b_d6bc_48d8_afaf_5de73ce47c44.slice. Dec 13 01:57:56.331438 kubelet[2013]: I1213 01:57:56.331414 2013 topology_manager.go:215] "Topology Admit Handler" podUID="9894b5e2-606d-4da2-a651-755a607c9114" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6qz5v" Dec 13 01:57:56.336712 systemd[1]: Created slice kubepods-burstable-pod9894b5e2_606d_4da2_a651_755a607c9114.slice. 
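Between 01:57:51 and 01:57:56 the cilium pod walks its init containers in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the long-running cilium-agent. Each short-lived step ends with its scope deactivating and a "shim disconnected" / "cleaning up dead shim" pair, which is normal teardown of the runc v2 shim after the container exits, not a failure. A small checker that a stream of container names respects that order (the names are taken from the CreateContainer entries above):

    package main

    import "fmt"

    // Expected cilium init sequence, as named in the CreateContainer entries.
    var want = []string{
        "mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs",
        "clean-cilium-state", "cilium-agent",
    }

    // inOrder reports whether got contains want as a subsequence, i.e. the
    // init steps started in the expected order (other log noise may
    // interleave between them).
    func inOrder(got []string) bool {
        i := 0
        for _, name := range got {
            if i < len(want) && name == want[i] {
                i++
            }
        }
        return i == len(want)
    }

    func main() {
        got := []string{"mount-cgroup", "apply-sysctl-overwrites",
            "mount-bpf-fs", "clean-cilium-state", "cilium-agent"}
        fmt.Println(inOrder(got)) // true
    }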
Dec 13 01:57:56.468767 kubelet[2013]: I1213 01:57:56.468726 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9894b5e2-606d-4da2-a651-755a607c9114-config-volume\") pod \"coredns-7db6d8ff4d-6qz5v\" (UID: \"9894b5e2-606d-4da2-a651-755a607c9114\") " pod="kube-system/coredns-7db6d8ff4d-6qz5v" Dec 13 01:57:56.468995 kubelet[2013]: I1213 01:57:56.468975 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44-config-volume\") pod \"coredns-7db6d8ff4d-c5pnh\" (UID: \"cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44\") " pod="kube-system/coredns-7db6d8ff4d-c5pnh" Dec 13 01:57:56.469135 kubelet[2013]: I1213 01:57:56.469090 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvm6f\" (UniqueName: \"kubernetes.io/projected/cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44-kube-api-access-qvm6f\") pod \"coredns-7db6d8ff4d-c5pnh\" (UID: \"cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44\") " pod="kube-system/coredns-7db6d8ff4d-c5pnh" Dec 13 01:57:56.469135 kubelet[2013]: I1213 01:57:56.469133 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9x27\" (UniqueName: \"kubernetes.io/projected/9894b5e2-606d-4da2-a651-755a607c9114-kube-api-access-t9x27\") pod \"coredns-7db6d8ff4d-6qz5v\" (UID: \"9894b5e2-606d-4da2-a651-755a607c9114\") " pod="kube-system/coredns-7db6d8ff4d-6qz5v" Dec 13 01:57:56.925162 kubelet[2013]: E1213 01:57:56.925119 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:56.930614 kubelet[2013]: E1213 01:57:56.930581 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:56.931225 env[1209]: time="2024-12-13T01:57:56.931173239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5pnh,Uid:cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:56.939570 kubelet[2013]: E1213 01:57:56.939531 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:56.940075 env[1209]: time="2024-12-13T01:57:56.940024508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qz5v,Uid:9894b5e2-606d-4da2-a651-755a607c9114,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:57.202304 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:56776.service. Dec 13 01:57:57.266122 sshd[2833]: Accepted publickey for core from 10.0.0.1 port 56776 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:57:57.267190 sshd[2833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:57.270417 systemd-logind[1198]: New session 8 of user core. Dec 13 01:57:57.271149 systemd[1]: Started session-8.scope. Dec 13 01:57:57.434678 sshd[2833]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:57.437021 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:56776.service: Deactivated successfully. Dec 13 01:57:57.437672 systemd[1]: session-8.scope: Deactivated successfully. 
Dec 13 01:57:57.438479 systemd-logind[1198]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:57:57.439160 systemd-logind[1198]: Removed session 8. Dec 13 01:57:57.926490 kubelet[2013]: E1213 01:57:57.926378 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:58.096558 systemd-networkd[1032]: cilium_host: Link UP Dec 13 01:57:58.096711 systemd-networkd[1032]: cilium_net: Link UP Dec 13 01:57:58.096714 systemd-networkd[1032]: cilium_net: Gained carrier Dec 13 01:57:58.098143 systemd-networkd[1032]: cilium_host: Gained carrier Dec 13 01:57:58.098668 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 01:57:58.171334 systemd-networkd[1032]: cilium_vxlan: Link UP Dec 13 01:57:58.171345 systemd-networkd[1032]: cilium_vxlan: Gained carrier Dec 13 01:57:58.351680 kernel: NET: Registered PF_ALG protocol family Dec 13 01:57:58.735776 systemd-networkd[1032]: cilium_net: Gained IPv6LL Dec 13 01:57:58.736023 systemd-networkd[1032]: cilium_host: Gained IPv6LL Dec 13 01:57:58.873188 systemd-networkd[1032]: lxc_health: Link UP Dec 13 01:57:58.882421 systemd-networkd[1032]: lxc_health: Gained carrier Dec 13 01:57:58.882842 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 01:57:58.927860 kubelet[2013]: E1213 01:57:58.927813 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:59.084490 kubelet[2013]: I1213 01:57:59.083018 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rz47j" podStartSLOduration=9.287730326 podStartE2EDuration="22.083000686s" podCreationTimestamp="2024-12-13 01:57:37 +0000 UTC" firstStartedPulling="2024-12-13 01:57:39.125556417 +0000 UTC m=+18.342090160" lastFinishedPulling="2024-12-13 01:57:51.920826777 +0000 UTC m=+31.137360520" observedRunningTime="2024-12-13 01:57:57.011398879 +0000 UTC m=+36.227932642" watchObservedRunningTime="2024-12-13 01:57:59.083000686 +0000 UTC m=+38.299534429" Dec 13 01:57:59.131268 systemd-networkd[1032]: lxc6852a6f35eb9: Link UP Dec 13 01:57:59.138970 systemd-networkd[1032]: lxc09abfa5d133b: Link UP Dec 13 01:57:59.145676 kernel: eth0: renamed from tmp46a7b Dec 13 01:57:59.152727 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:57:59.152771 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6852a6f35eb9: link becomes ready Dec 13 01:57:59.152818 systemd-networkd[1032]: lxc6852a6f35eb9: Gained carrier Dec 13 01:57:59.153963 kernel: eth0: renamed from tmp0418b Dec 13 01:57:59.164251 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc09abfa5d133b: link becomes ready Dec 13 01:57:59.164977 systemd-networkd[1032]: lxc09abfa5d133b: Gained carrier Dec 13 01:57:59.634870 systemd-networkd[1032]: cilium_vxlan: Gained IPv6LL Dec 13 01:57:59.928926 kubelet[2013]: E1213 01:57:59.928902 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:00.082062 systemd-networkd[1032]: lxc_health: Gained IPv6LL Dec 13 01:58:00.463781 systemd-networkd[1032]: lxc6852a6f35eb9: Gained IPv6LL Dec 13 01:58:00.930071 kubelet[2013]: E1213 01:58:00.930042 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:01.167883 systemd-networkd[1032]: lxc09abfa5d133b: Gained IPv6LL Dec 13 01:58:01.931606 kubelet[2013]: E1213 01:58:01.931560 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:02.295207 env[1209]: time="2024-12-13T01:58:02.295074390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:02.295207 env[1209]: time="2024-12-13T01:58:02.295136446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:02.295207 env[1209]: time="2024-12-13T01:58:02.295156915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:02.295787 env[1209]: time="2024-12-13T01:58:02.295737886Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a7b3dbfd8ba5e46af38020949b678bdf569101551d20446c0eb37790a984bc pid=3269 runtime=io.containerd.runc.v2 Dec 13 01:58:02.302193 env[1209]: time="2024-12-13T01:58:02.302136171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:02.302193 env[1209]: time="2024-12-13T01:58:02.302173932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:02.302193 env[1209]: time="2024-12-13T01:58:02.302184071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:02.302440 env[1209]: time="2024-12-13T01:58:02.302320797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0418b5aed7dc9e21e64593fb8865a87ff974a7283efe738e608b573c62679caf pid=3290 runtime=io.containerd.runc.v2 Dec 13 01:58:02.309260 systemd[1]: Started cri-containerd-46a7b3dbfd8ba5e46af38020949b678bdf569101551d20446c0eb37790a984bc.scope. Dec 13 01:58:02.310725 systemd[1]: run-containerd-runc-k8s.io-46a7b3dbfd8ba5e46af38020949b678bdf569101551d20446c0eb37790a984bc-runc.uE9QmO.mount: Deactivated successfully. Dec 13 01:58:02.316909 systemd[1]: Started cri-containerd-0418b5aed7dc9e21e64593fb8865a87ff974a7283efe738e608b573c62679caf.scope. 
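The systemd-networkd entries above show cilium's datapath coming up: cilium_host and cilium_net (the agent's host-side veth pair), cilium_vxlan for the overlay, lxc_health, and one lxc* host-side veth per pod (lxc6852... and lxc09ab... for the two coredns pods, with eth0 renamed from the tmp* device into each pod's network namespace). A stdlib sketch for enumerating those devices on such a node; it prints nothing on hosts without cilium:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifs, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        // cilium_* are the agent's own devices; lxc* are the per-pod
        // host-side veth endpoints.
        for _, i := range ifs {
            if strings.HasPrefix(i.Name, "cilium_") || strings.HasPrefix(i.Name, "lxc") {
                fmt.Printf("%-20s up=%v\n", i.Name, i.Flags&net.FlagUp != 0)
            }
        }
    }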
Dec 13 01:58:02.324457 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:58:02.331134 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:58:02.348772 env[1209]: time="2024-12-13T01:58:02.348724095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c5pnh,Uid:cf8f4c6b-d6bc-48d8-afaf-5de73ce47c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"46a7b3dbfd8ba5e46af38020949b678bdf569101551d20446c0eb37790a984bc\"" Dec 13 01:58:02.349481 kubelet[2013]: E1213 01:58:02.349449 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:02.352571 env[1209]: time="2024-12-13T01:58:02.352539882Z" level=info msg="CreateContainer within sandbox \"46a7b3dbfd8ba5e46af38020949b678bdf569101551d20446c0eb37790a984bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:58:02.355897 env[1209]: time="2024-12-13T01:58:02.355667654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qz5v,Uid:9894b5e2-606d-4da2-a651-755a607c9114,Namespace:kube-system,Attempt:0,} returns sandbox id \"0418b5aed7dc9e21e64593fb8865a87ff974a7283efe738e608b573c62679caf\"" Dec 13 01:58:02.356517 kubelet[2013]: E1213 01:58:02.356313 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:02.358098 env[1209]: time="2024-12-13T01:58:02.358069765Z" level=info msg="CreateContainer within sandbox \"0418b5aed7dc9e21e64593fb8865a87ff974a7283efe738e608b573c62679caf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:58:02.376551 env[1209]: time="2024-12-13T01:58:02.376482963Z" level=info msg="CreateContainer within sandbox \"46a7b3dbfd8ba5e46af38020949b678bdf569101551d20446c0eb37790a984bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52849449e2f2c469454f48e0e00cc17b86a1c467f0ad60219385e89bc592ce98\"" Dec 13 01:58:02.377155 env[1209]: time="2024-12-13T01:58:02.377116903Z" level=info msg="StartContainer for \"52849449e2f2c469454f48e0e00cc17b86a1c467f0ad60219385e89bc592ce98\"" Dec 13 01:58:02.377578 env[1209]: time="2024-12-13T01:58:02.377546209Z" level=info msg="CreateContainer within sandbox \"0418b5aed7dc9e21e64593fb8865a87ff974a7283efe738e608b573c62679caf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42f633f87571edb9fe7fc932894c58a80e9336b2898a09f8d3cb5ba98fa34806\"" Dec 13 01:58:02.378005 env[1209]: time="2024-12-13T01:58:02.377977730Z" level=info msg="StartContainer for \"42f633f87571edb9fe7fc932894c58a80e9336b2898a09f8d3cb5ba98fa34806\"" Dec 13 01:58:02.394763 systemd[1]: Started cri-containerd-52849449e2f2c469454f48e0e00cc17b86a1c467f0ad60219385e89bc592ce98.scope. Dec 13 01:58:02.399654 systemd[1]: Started cri-containerd-42f633f87571edb9fe7fc932894c58a80e9336b2898a09f8d3cb5ba98fa34806.scope. 
Dec 13 01:58:02.423047 env[1209]: time="2024-12-13T01:58:02.422989374Z" level=info msg="StartContainer for \"52849449e2f2c469454f48e0e00cc17b86a1c467f0ad60219385e89bc592ce98\" returns successfully" Dec 13 01:58:02.425367 env[1209]: time="2024-12-13T01:58:02.425302508Z" level=info msg="StartContainer for \"42f633f87571edb9fe7fc932894c58a80e9336b2898a09f8d3cb5ba98fa34806\" returns successfully" Dec 13 01:58:02.439614 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:56792.service. Dec 13 01:58:02.480082 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 56792 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:02.481126 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.484871 systemd-logind[1198]: New session 9 of user core. Dec 13 01:58:02.485590 systemd[1]: Started session-9.scope. Dec 13 01:58:02.612027 sshd[3404]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.614566 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:56792.service: Deactivated successfully. Dec 13 01:58:02.615251 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:58:02.615740 systemd-logind[1198]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:58:02.616441 systemd-logind[1198]: Removed session 9. Dec 13 01:58:02.935028 kubelet[2013]: E1213 01:58:02.934275 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:02.936382 kubelet[2013]: E1213 01:58:02.936212 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:02.944254 kubelet[2013]: I1213 01:58:02.944189 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c5pnh" podStartSLOduration=25.944173528 podStartE2EDuration="25.944173528s" podCreationTimestamp="2024-12-13 01:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:02.944025911 +0000 UTC m=+42.160559654" watchObservedRunningTime="2024-12-13 01:58:02.944173528 +0000 UTC m=+42.160707271" Dec 13 01:58:03.937958 kubelet[2013]: E1213 01:58:03.937922 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:03.938286 kubelet[2013]: E1213 01:58:03.937981 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:04.939773 kubelet[2013]: E1213 01:58:04.939743 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:04.940104 kubelet[2013]: E1213 01:58:04.939962 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:07.615250 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:45154.service. 
Dec 13 01:58:07.650443 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 45154 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:07.651247 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:07.654124 systemd-logind[1198]: New session 10 of user core. Dec 13 01:58:07.654843 systemd[1]: Started session-10.scope. Dec 13 01:58:07.783000 sshd[3442]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:07.784856 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:45154.service: Deactivated successfully. Dec 13 01:58:07.785585 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:58:07.786072 systemd-logind[1198]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:58:07.786836 systemd-logind[1198]: Removed session 10. Dec 13 01:58:12.787352 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:45166.service. Dec 13 01:58:12.827135 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 45166 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:12.828009 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:12.831539 systemd-logind[1198]: New session 11 of user core. Dec 13 01:58:12.832257 systemd[1]: Started session-11.scope. Dec 13 01:58:12.935119 sshd[3458]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:12.937984 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:45166.service: Deactivated successfully. Dec 13 01:58:12.938548 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:58:12.939110 systemd-logind[1198]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:58:12.940227 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:45182.service. Dec 13 01:58:12.941989 systemd-logind[1198]: Removed session 11. Dec 13 01:58:12.976699 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 45182 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:12.977829 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:12.981155 systemd-logind[1198]: New session 12 of user core. Dec 13 01:58:12.981981 systemd[1]: Started session-12.scope. Dec 13 01:58:13.138360 sshd[3472]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:13.141779 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:45182.service: Deactivated successfully. Dec 13 01:58:13.142398 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:58:13.145533 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:45192.service. Dec 13 01:58:13.147416 systemd-logind[1198]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:58:13.148831 systemd-logind[1198]: Removed session 12. Dec 13 01:58:13.183723 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 45192 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:13.184908 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:13.188353 systemd-logind[1198]: New session 13 of user core. Dec 13 01:58:13.189401 systemd[1]: Started session-13.scope. Dec 13 01:58:13.291112 sshd[3484]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:13.293675 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:45192.service: Deactivated successfully. Dec 13 01:58:13.294365 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:58:13.294867 systemd-logind[1198]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 01:58:13.295457 systemd-logind[1198]: Removed session 13. Dec 13 01:58:18.294570 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:40906.service. Dec 13 01:58:18.330277 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 40906 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:18.331375 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:18.334772 systemd-logind[1198]: New session 14 of user core. Dec 13 01:58:18.335455 systemd[1]: Started session-14.scope. Dec 13 01:58:18.435005 sshd[3497]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:18.437466 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:40906.service: Deactivated successfully. Dec 13 01:58:18.438253 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:58:18.438842 systemd-logind[1198]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:58:18.439419 systemd-logind[1198]: Removed session 14. Dec 13 01:58:23.439589 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:40916.service. Dec 13 01:58:23.477713 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 40916 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:23.478729 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:23.482089 systemd-logind[1198]: New session 15 of user core. Dec 13 01:58:23.482859 systemd[1]: Started session-15.scope. Dec 13 01:58:23.583974 sshd[3512]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:23.586783 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:40916.service: Deactivated successfully. Dec 13 01:58:23.587271 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:58:23.587984 systemd-logind[1198]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:58:23.589012 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:40920.service. Dec 13 01:58:23.589897 systemd-logind[1198]: Removed session 15. Dec 13 01:58:23.626651 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 40920 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:23.627662 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:23.630768 systemd-logind[1198]: New session 16 of user core. Dec 13 01:58:23.631427 systemd[1]: Started session-16.scope. Dec 13 01:58:23.856482 sshd[3525]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:23.859142 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:40920.service: Deactivated successfully. Dec 13 01:58:23.859770 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:58:23.860325 systemd-logind[1198]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:58:23.861384 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:40936.service. Dec 13 01:58:23.862120 systemd-logind[1198]: Removed session 16. Dec 13 01:58:23.899098 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 40936 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:58:23.900113 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:23.903292 systemd-logind[1198]: New session 17 of user core. Dec 13 01:58:23.904082 systemd[1]: Started session-17.scope. Dec 13 01:58:25.253859 sshd[3537]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:25.256240 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:40948.service. 
Dec 13 01:58:25.257071 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:40936.service: Deactivated successfully.
Dec 13 01:58:25.257739 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:58:25.258500 systemd-logind[1198]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:58:25.259579 systemd-logind[1198]: Removed session 17.
Dec 13 01:58:25.299040 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 40948 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:25.300158 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:25.303535 systemd-logind[1198]: New session 18 of user core.
Dec 13 01:58:25.304305 systemd[1]: Started session-18.scope.
Dec 13 01:58:25.519746 sshd[3556]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:25.522987 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:40950.service.
Dec 13 01:58:25.523374 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:40948.service: Deactivated successfully.
Dec 13 01:58:25.525413 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:58:25.526069 systemd-logind[1198]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:58:25.527200 systemd-logind[1198]: Removed session 18.
Dec 13 01:58:25.560578 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 40950 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:25.561676 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:25.564790 systemd-logind[1198]: New session 19 of user core.
Dec 13 01:58:25.565709 systemd[1]: Started session-19.scope.
Dec 13 01:58:25.663867 sshd[3567]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:25.666390 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:40950.service: Deactivated successfully.
Dec 13 01:58:25.667141 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:58:25.667708 systemd-logind[1198]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:58:25.668395 systemd-logind[1198]: Removed session 19.
Dec 13 01:58:30.668453 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:44412.service.
Dec 13 01:58:30.705362 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 44412 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:30.706805 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:30.710472 systemd-logind[1198]: New session 20 of user core.
Dec 13 01:58:30.711197 systemd[1]: Started session-20.scope.
Dec 13 01:58:30.806992 sshd[3581]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:30.808956 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:44412.service: Deactivated successfully.
Dec 13 01:58:30.809579 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:58:30.810135 systemd-logind[1198]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:58:30.810788 systemd-logind[1198]: Removed session 20.
Dec 13 01:58:35.810963 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:44414.service.
Dec 13 01:58:35.846094 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 44414 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:35.846983 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:35.850630 systemd-logind[1198]: New session 21 of user core.
Dec 13 01:58:35.851577 systemd[1]: Started session-21.scope.
Dec 13 01:58:35.947007 sshd[3598]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:35.949389 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:44414.service: Deactivated successfully.
Dec 13 01:58:35.950156 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:58:35.950772 systemd-logind[1198]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:58:35.951415 systemd-logind[1198]: Removed session 21.
Dec 13 01:58:40.951033 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:35958.service.
Dec 13 01:58:40.986627 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 35958 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:40.987722 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:40.991509 systemd-logind[1198]: New session 22 of user core.
Dec 13 01:58:40.992549 systemd[1]: Started session-22.scope.
Dec 13 01:58:41.090593 sshd[3613]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:41.092701 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:35958.service: Deactivated successfully.
Dec 13 01:58:41.093444 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:58:41.093976 systemd-logind[1198]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:58:41.094653 systemd-logind[1198]: Removed session 22.
Dec 13 01:58:43.854365 kubelet[2013]: E1213 01:58:43.854325 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:46.094371 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:47514.service.
Dec 13 01:58:46.130477 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 47514 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:46.131421 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:46.134677 systemd-logind[1198]: New session 23 of user core.
Dec 13 01:58:46.135607 systemd[1]: Started session-23.scope.
Dec 13 01:58:46.235223 sshd[3627]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:46.239295 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:47516.service.
Dec 13 01:58:46.239862 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:47514.service: Deactivated successfully.
Dec 13 01:58:46.240542 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:58:46.241172 systemd-logind[1198]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:58:46.242010 systemd-logind[1198]: Removed session 23.
Dec 13 01:58:46.275753 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 47516 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:46.276769 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:46.280250 systemd-logind[1198]: New session 24 of user core.
Dec 13 01:58:46.281417 systemd[1]: Started session-24.scope.
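
Note: the kubelet dns.go:153 warning above fires because the glibc resolver only honors the first three nameserver entries in resolv.conf (MAXNS = 3), so kubelet truncates the list and reports which servers survive; here exactly 1.1.1.1, 1.0.0.1 and 8.8.8.8 were applied. A small sketch of the same check, run against a node's resolv.conf:

    MAXNS = 3  # glibc's compile-time limit on usable nameservers

    def applied_nameservers(path="/etc/resolv.conf"):
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if parts and parts[0] == "nameserver":
                    servers.append(parts[1])
        return servers[:MAXNS], servers[MAXNS:]

    kept, dropped = applied_nameservers()
    if dropped:
        print(f"limit exceeded; applied: {' '.join(kept)}; omitted: {' '.join(dropped)}")
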
Dec 13 01:58:47.701302 kubelet[2013]: I1213 01:58:47.701218 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6qz5v" podStartSLOduration=70.701179391 podStartE2EDuration="1m10.701179391s" podCreationTimestamp="2024-12-13 01:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:02.970723765 +0000 UTC m=+42.187257518" watchObservedRunningTime="2024-12-13 01:58:47.701179391 +0000 UTC m=+86.917713134"
Dec 13 01:58:47.707476 env[1209]: time="2024-12-13T01:58:47.707421341Z" level=info msg="StopContainer for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" with timeout 30 (s)"
Dec 13 01:58:47.707874 env[1209]: time="2024-12-13T01:58:47.707782368Z" level=info msg="Stop container \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" with signal terminated"
Dec 13 01:58:47.715918 systemd[1]: run-containerd-runc-k8s.io-47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e-runc.h7exw3.mount: Deactivated successfully.
Dec 13 01:58:47.721235 systemd[1]: cri-containerd-de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2.scope: Deactivated successfully.
Dec 13 01:58:47.733320 env[1209]: time="2024-12-13T01:58:47.733256250Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:58:47.739989 env[1209]: time="2024-12-13T01:58:47.739936484Z" level=info msg="StopContainer for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" with timeout 2 (s)"
Dec 13 01:58:47.740341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2-rootfs.mount: Deactivated successfully.
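
Note: the teardown of the Cilium pods starts here. "StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the CRI graceful-stop contract: the runtime delivers the stop signal (SIGTERM) and escalates to SIGKILL only if the container outlives the grace period. The level=error entry about /etc/cni/net.d/05-cilium.conf being removed also matters later: with the Cilium CNI config gone the node has no network config, which is why kubelet reports "Container runtime network not ready" further down. A generic sketch of the stop semantics (an illustration of the protocol, not containerd's code):

    import errno, os, signal, time

    def stop_with_timeout(pid, timeout=30.0):
        """Send SIGTERM, then SIGKILL if the process outlives the grace period."""
        os.kill(pid, signal.SIGTERM)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)  # probe: does the pid still exist?
            except OSError as e:
                if e.errno == errno.ESRCH:
                    return True  # exited within the grace period
                raise
            time.sleep(0.1)
        os.kill(pid, signal.SIGKILL)  # grace period expired
        return False
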
Dec 13 01:58:47.740877 env[1209]: time="2024-12-13T01:58:47.740850183Z" level=info msg="Stop container \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" with signal terminated"
Dec 13 01:58:47.746607 systemd-networkd[1032]: lxc_health: Link DOWN
Dec 13 01:58:47.746614 systemd-networkd[1032]: lxc_health: Lost carrier
Dec 13 01:58:47.752202 env[1209]: time="2024-12-13T01:58:47.752144378Z" level=info msg="shim disconnected" id=de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2
Dec 13 01:58:47.752202 env[1209]: time="2024-12-13T01:58:47.752184725Z" level=warning msg="cleaning up after shim disconnected" id=de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2 namespace=k8s.io
Dec 13 01:58:47.752202 env[1209]: time="2024-12-13T01:58:47.752193021Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:47.758213 env[1209]: time="2024-12-13T01:58:47.758151100Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3695 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:47.760990 env[1209]: time="2024-12-13T01:58:47.760958563Z" level=info msg="StopContainer for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" returns successfully"
Dec 13 01:58:47.761558 env[1209]: time="2024-12-13T01:58:47.761536503Z" level=info msg="StopPodSandbox for \"0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5\""
Dec 13 01:58:47.761620 env[1209]: time="2024-12-13T01:58:47.761593141Z" level=info msg="Container to stop \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:58:47.763305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5-shm.mount: Deactivated successfully.
Dec 13 01:58:47.771980 systemd[1]: cri-containerd-47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e.scope: Deactivated successfully.
Dec 13 01:58:47.772209 systemd[1]: cri-containerd-47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e.scope: Consumed 5.762s CPU time.
Dec 13 01:58:47.775187 systemd[1]: cri-containerd-0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5.scope: Deactivated successfully.
Dec 13 01:58:47.790455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e-rootfs.mount: Deactivated successfully.
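
Note: the env[1209] (containerd) entries use logfmt-style key=value records: time=, level=, msg=, plus fields such as id= and namespace=. For ad-hoc analysis of lines like the "shim disconnected" triplets above, a minimal parser is enough (it covers the simple lines here, not every quoting corner case):

    import shlex

    def parse_logfmt(line):
        """Split a containerd-style key=value line into a dict; shlex honors the quoted values."""
        fields = {}
        for tok in shlex.split(line):
            key, sep, val = tok.partition("=")
            if sep:
                fields[key] = val
        return fields

    rec = parse_logfmt('time="2024-12-13T01:58:47.752144378Z" level=info msg="shim disconnected"')
    assert rec["level"] == "info" and rec["msg"] == "shim disconnected"
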
Dec 13 01:58:47.797801 env[1209]: time="2024-12-13T01:58:47.797760896Z" level=info msg="shim disconnected" id=0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5
Dec 13 01:58:47.798482 env[1209]: time="2024-12-13T01:58:47.798442503Z" level=warning msg="cleaning up after shim disconnected" id=0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5 namespace=k8s.io
Dec 13 01:58:47.798482 env[1209]: time="2024-12-13T01:58:47.798460757Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:47.798649 env[1209]: time="2024-12-13T01:58:47.798025990Z" level=info msg="shim disconnected" id=47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e
Dec 13 01:58:47.798649 env[1209]: time="2024-12-13T01:58:47.798565056Z" level=warning msg="cleaning up after shim disconnected" id=47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e namespace=k8s.io
Dec 13 01:58:47.798649 env[1209]: time="2024-12-13T01:58:47.798573191Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:47.804310 env[1209]: time="2024-12-13T01:58:47.804264163Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3741 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:47.804600 env[1209]: time="2024-12-13T01:58:47.804567710Z" level=info msg="TearDown network for sandbox \"0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5\" successfully"
Dec 13 01:58:47.804600 env[1209]: time="2024-12-13T01:58:47.804591365Z" level=info msg="StopPodSandbox for \"0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5\" returns successfully"
Dec 13 01:58:47.806092 env[1209]: time="2024-12-13T01:58:47.806035644Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:47.808898 env[1209]: time="2024-12-13T01:58:47.808866332Z" level=info msg="StopContainer for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" returns successfully"
Dec 13 01:58:47.809285 env[1209]: time="2024-12-13T01:58:47.809254450Z" level=info msg="StopPodSandbox for \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\""
Dec 13 01:58:47.809336 env[1209]: time="2024-12-13T01:58:47.809314154Z" level=info msg="Container to stop \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:58:47.809336 env[1209]: time="2024-12-13T01:58:47.809328771Z" level=info msg="Container to stop \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:58:47.809459 env[1209]: time="2024-12-13T01:58:47.809338480Z" level=info msg="Container to stop \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:58:47.809459 env[1209]: time="2024-12-13T01:58:47.809349792Z" level=info msg="Container to stop \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:58:47.809459 env[1209]: time="2024-12-13T01:58:47.809359179Z" level=info msg="Container to stop \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:58:47.815079 systemd[1]: cri-containerd-4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a.scope: Deactivated successfully.
Dec 13 01:58:47.838962 env[1209]: time="2024-12-13T01:58:47.838914339Z" level=info msg="shim disconnected" id=4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a
Dec 13 01:58:47.839179 env[1209]: time="2024-12-13T01:58:47.839143465Z" level=warning msg="cleaning up after shim disconnected" id=4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a namespace=k8s.io
Dec 13 01:58:47.839179 env[1209]: time="2024-12-13T01:58:47.839163533Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:47.845398 env[1209]: time="2024-12-13T01:58:47.845347032Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3783 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:47.845704 env[1209]: time="2024-12-13T01:58:47.845683172Z" level=info msg="TearDown network for sandbox \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" successfully"
Dec 13 01:58:47.845769 env[1209]: time="2024-12-13T01:58:47.845704412Z" level=info msg="StopPodSandbox for \"4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a\" returns successfully"
Dec 13 01:58:47.921953 kubelet[2013]: I1213 01:58:47.921901 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqfhl\" (UniqueName: \"kubernetes.io/projected/9ccf0f7a-b588-4567-bca8-eace7a988482-kube-api-access-nqfhl\") pod \"9ccf0f7a-b588-4567-bca8-eace7a988482\" (UID: \"9ccf0f7a-b588-4567-bca8-eace7a988482\") "
Dec 13 01:58:47.921953 kubelet[2013]: I1213 01:58:47.921949 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-hostproc\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922189 kubelet[2013]: I1213 01:58:47.922003 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-hostproc" (OuterVolumeSpecName: "hostproc") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.922189 kubelet[2013]: I1213 01:58:47.922056 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-kernel\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922189 kubelet[2013]: I1213 01:58:47.922071 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.922189 kubelet[2013]: I1213 01:58:47.922104 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-lib-modules\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922189 kubelet[2013]: I1213 01:58:47.922118 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.922351 kubelet[2013]: I1213 01:58:47.922143 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-xtables-lock\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922351 kubelet[2013]: I1213 01:58:47.922156 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.922351 kubelet[2013]: I1213 01:58:47.922193 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-etc-cni-netd\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922351 kubelet[2013]: I1213 01:58:47.922218 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ccf0f7a-b588-4567-bca8-eace7a988482-cilium-config-path\") pod \"9ccf0f7a-b588-4567-bca8-eace7a988482\" (UID: \"9ccf0f7a-b588-4567-bca8-eace7a988482\") "
Dec 13 01:58:47.922351 kubelet[2013]: I1213 01:58:47.922238 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-hubble-tls\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922351 kubelet[2013]: I1213 01:58:47.922256 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-cgroup\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922569 kubelet[2013]: I1213 01:58:47.922277 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81aaebab-148a-4727-b718-cc60d72f5b60-clustermesh-secrets\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922569 kubelet[2013]: I1213 01:58:47.922300 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cni-path\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922569 kubelet[2013]: I1213 01:58:47.922319 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-bpf-maps\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922569 kubelet[2013]: I1213 01:58:47.922335 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-config-path\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922569 kubelet[2013]: I1213 01:58:47.922349 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxcxb\" (UniqueName: \"kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-kube-api-access-mxcxb\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:47.922569 kubelet[2013]: I1213 01:58:47.922381 2013 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:47.922791 kubelet[2013]: I1213 01:58:47.922391 2013 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:47.922791 kubelet[2013]: I1213 01:58:47.922399 2013 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:47.922791 kubelet[2013]: I1213 01:58:47.922417 2013 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:47.922977 kubelet[2013]: I1213 01:58:47.922938 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.924781 kubelet[2013]: I1213 01:58:47.924749 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf0f7a-b588-4567-bca8-eace7a988482-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ccf0f7a-b588-4567-bca8-eace7a988482" (UID: "9ccf0f7a-b588-4567-bca8-eace7a988482"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:58:47.924875 kubelet[2013]: I1213 01:58:47.924844 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.924875 kubelet[2013]: I1213 01:58:47.924852 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.924975 kubelet[2013]: I1213 01:58:47.924882 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cni-path" (OuterVolumeSpecName: "cni-path") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:47.925248 kubelet[2013]: I1213 01:58:47.925220 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-kube-api-access-mxcxb" (OuterVolumeSpecName: "kube-api-access-mxcxb") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "kube-api-access-mxcxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:58:47.925528 kubelet[2013]: I1213 01:58:47.925482 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccf0f7a-b588-4567-bca8-eace7a988482-kube-api-access-nqfhl" (OuterVolumeSpecName: "kube-api-access-nqfhl") pod "9ccf0f7a-b588-4567-bca8-eace7a988482" (UID: "9ccf0f7a-b588-4567-bca8-eace7a988482"). InnerVolumeSpecName "kube-api-access-nqfhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:58:47.926101 kubelet[2013]: I1213 01:58:47.926077 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aaebab-148a-4727-b718-cc60d72f5b60-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:58:47.926861 kubelet[2013]: I1213 01:58:47.926829 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:58:47.927094 kubelet[2013]: I1213 01:58:47.927055 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:58:48.014514 kubelet[2013]: I1213 01:58:48.014347 2013 scope.go:117] "RemoveContainer" containerID="de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2"
Dec 13 01:58:48.015562 env[1209]: time="2024-12-13T01:58:48.015477646Z" level=info msg="RemoveContainer for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\""
Dec 13 01:58:48.018392 systemd[1]: Removed slice kubepods-besteffort-pod9ccf0f7a_b588_4567_bca8_eace7a988482.slice.
Dec 13 01:58:48.022554 kubelet[2013]: I1213 01:58:48.022528 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-net\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022562 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-run\") pod \"81aaebab-148a-4727-b718-cc60d72f5b60\" (UID: \"81aaebab-148a-4727-b718-cc60d72f5b60\") "
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022592 2013 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022602 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ccf0f7a-b588-4567-bca8-eace7a988482-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022610 2013 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022617 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022624 2013 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81aaebab-148a-4727-b718-cc60d72f5b60-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022659 2013 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023129 kubelet[2013]: I1213 01:58:48.022666 2013 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023323 kubelet[2013]: I1213 01:58:48.022673 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023323 kubelet[2013]: I1213 01:58:48.022702 2013 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mxcxb\" (UniqueName: \"kubernetes.io/projected/81aaebab-148a-4727-b718-cc60d72f5b60-kube-api-access-mxcxb\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023323 kubelet[2013]: I1213 01:58:48.022727 2013 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nqfhl\" (UniqueName: \"kubernetes.io/projected/9ccf0f7a-b588-4567-bca8-eace7a988482-kube-api-access-nqfhl\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.023323 kubelet[2013]: I1213 01:58:48.022754 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:48.023323 kubelet[2013]: I1213 01:58:48.022775 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "81aaebab-148a-4727-b718-cc60d72f5b60" (UID: "81aaebab-148a-4727-b718-cc60d72f5b60"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:48.083560 env[1209]: time="2024-12-13T01:58:48.083507126Z" level=info msg="RemoveContainer for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" returns successfully"
Dec 13 01:58:48.083870 kubelet[2013]: I1213 01:58:48.083849 2013 scope.go:117] "RemoveContainer" containerID="de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2"
Dec 13 01:58:48.084219 env[1209]: time="2024-12-13T01:58:48.084140070Z" level=error msg="ContainerStatus for \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\": not found"
Dec 13 01:58:48.084333 kubelet[2013]: E1213 01:58:48.084313 2013 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\": not found" containerID="de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2"
Dec 13 01:58:48.084435 kubelet[2013]: I1213 01:58:48.084339 2013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2"} err="failed to get container status \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\": rpc error: code = NotFound desc = an error occurred when try to find container \"de8c57a5153de815d0f9cdbcca416ca8def60fb44030e26c0f6d91eb7806ead2\": not found"
Dec 13 01:58:48.084472 kubelet[2013]: I1213 01:58:48.084438 2013 scope.go:117] "RemoveContainer" containerID="47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e"
Dec 13 01:58:48.085434 env[1209]: time="2024-12-13T01:58:48.085393956Z" level=info msg="RemoveContainer for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\""
Dec 13 01:58:48.122902 kubelet[2013]: I1213 01:58:48.122869 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.122902 kubelet[2013]: I1213 01:58:48.122894 2013 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81aaebab-148a-4727-b718-cc60d72f5b60-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:48.154801 env[1209]: time="2024-12-13T01:58:48.154763505Z" level=info msg="RemoveContainer for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" returns successfully"
Dec 13 01:58:48.154972 kubelet[2013]: I1213 01:58:48.154948 2013 scope.go:117] "RemoveContainer" containerID="cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235"
Dec 13 01:58:48.156009 env[1209]: time="2024-12-13T01:58:48.155971474Z" level=info msg="RemoveContainer for \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\""
Dec 13 01:58:48.182002 env[1209]: time="2024-12-13T01:58:48.181961461Z" level=info msg="RemoveContainer for \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\" returns successfully"
Dec 13 01:58:48.182201 kubelet[2013]: I1213 01:58:48.182176 2013 scope.go:117] "RemoveContainer" containerID="edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd"
Dec 13 01:58:48.183168 env[1209]: time="2024-12-13T01:58:48.183147007Z" level=info msg="RemoveContainer for \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\""
Dec 13 01:58:48.196532 env[1209]: time="2024-12-13T01:58:48.196483381Z" level=info msg="RemoveContainer for \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\" returns successfully"
Dec 13 01:58:48.196630 kubelet[2013]: I1213 01:58:48.196607 2013 scope.go:117] "RemoveContainer" containerID="d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586"
Dec 13 01:58:48.197525 env[1209]: time="2024-12-13T01:58:48.197501447Z" level=info msg="RemoveContainer for \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\""
Dec 13 01:58:48.228039 env[1209]: time="2024-12-13T01:58:48.228009370Z" level=info msg="RemoveContainer for \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\" returns successfully"
Dec 13 01:58:48.228193 kubelet[2013]: I1213 01:58:48.228160 2013 scope.go:117] "RemoveContainer" containerID="11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040"
Dec 13 01:58:48.228947 env[1209]: time="2024-12-13T01:58:48.228924531Z" level=info msg="RemoveContainer for \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\""
Dec 13 01:58:48.231966 env[1209]: time="2024-12-13T01:58:48.231948264Z" level=info msg="RemoveContainer for \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\" returns successfully"
Dec 13 01:58:48.232074 kubelet[2013]: I1213 01:58:48.232054 2013 scope.go:117] "RemoveContainer" containerID="47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e"
Dec 13 01:58:48.232222 env[1209]: time="2024-12-13T01:58:48.232176509Z" level=error msg="ContainerStatus for \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\": not found"
Dec 13 01:58:48.232339 kubelet[2013]: E1213 01:58:48.232307 2013 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\": not found" containerID="47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e"
Dec 13 01:58:48.232339 kubelet[2013]: I1213 01:58:48.232332 2013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e"} err="failed to get container status \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\": rpc error: code = NotFound desc = an error occurred when try to find container \"47c40bfa6646f34f37525f054255b9a98a82696a46f3f33a0a6c4b26db7f137e\": not found"
Dec 13 01:58:48.232339 kubelet[2013]: I1213 01:58:48.232355 2013 scope.go:117] "RemoveContainer" containerID="cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235"
Dec 13 01:58:48.232604 env[1209]: time="2024-12-13T01:58:48.232494043Z" level=error msg="ContainerStatus for \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\": not found"
Dec 13 01:58:48.232740 kubelet[2013]: E1213 01:58:48.232700 2013 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\": not found" containerID="cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235"
Dec 13 01:58:48.232740 kubelet[2013]: I1213 01:58:48.232719 2013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235"} err="failed to get container status \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb2f7a59213465b594c2d0a742631abb4aa8427f5a2c5c285bc87e15959b0235\": not found"
Dec 13 01:58:48.232740 kubelet[2013]: I1213 01:58:48.232730 2013 scope.go:117] "RemoveContainer" containerID="edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd"
Dec 13 01:58:48.232870 env[1209]: time="2024-12-13T01:58:48.232835372Z" level=error msg="ContainerStatus for \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\": not found"
Dec 13 01:58:48.232939 kubelet[2013]: E1213 01:58:48.232918 2013 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\": not found" containerID="edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd"
Dec 13 01:58:48.232939 kubelet[2013]: I1213 01:58:48.232935 2013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd"} err="failed to get container status \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"edc30907956d0ea58579c2f6d1006e4aaa4df5ad46e705f059af6b7c051441cd\": not found"
Dec 13 01:58:48.233029 kubelet[2013]: I1213 01:58:48.232945 2013 scope.go:117] "RemoveContainer" containerID="d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586"
Dec 13 01:58:48.233089 env[1209]: time="2024-12-13T01:58:48.233056703Z" level=error msg="ContainerStatus for \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\": not found"
Dec 13 01:58:48.233156 kubelet[2013]: E1213 01:58:48.233139 2013 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\": not found" containerID="d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586"
Dec 13 01:58:48.233192 kubelet[2013]: I1213 01:58:48.233156 2013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586"} err="failed to get container status \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\": rpc error: code = NotFound desc = an error occurred when try to find container \"d02bd9f6d3f6c4e9cabbe8909527aace9b60f6299026730fc0f0b98de3021586\": not found"
Dec 13 01:58:48.233192 kubelet[2013]: I1213 01:58:48.233168 2013 scope.go:117] "RemoveContainer" containerID="11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040"
Dec 13 01:58:48.233304 env[1209]: time="2024-12-13T01:58:48.233269258Z" level=error msg="ContainerStatus for \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\": not found"
Dec 13 01:58:48.233377 kubelet[2013]: E1213 01:58:48.233351 2013 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\": not found" containerID="11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040"
Dec 13 01:58:48.233377 kubelet[2013]: I1213 01:58:48.233366 2013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040"} err="failed to get container status \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\": rpc error: code = NotFound desc = an error occurred when try to find container \"11f2ff61a55063ab1eb43b7d0e724eb0989a40b6e5182c3d77d0b05571b17040\": not found"
Dec 13 01:58:48.320959 systemd[1]: Removed slice kubepods-burstable-pod81aaebab_148a_4727_b718_cc60d72f5b60.slice.
Dec 13 01:58:48.321037 systemd[1]: kubepods-burstable-pod81aaebab_148a_4727_b718_cc60d72f5b60.slice: Consumed 5.846s CPU time.
Dec 13 01:58:48.713248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a-rootfs.mount: Deactivated successfully.
Dec 13 01:58:48.713336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ce6ff00e51d72f0eea883c11e2d8bdd69a0dcf6138db7a3301509e723956d2a-shm.mount: Deactivated successfully.
Dec 13 01:58:48.713385 systemd[1]: var-lib-kubelet-pods-81aaebab\x2d148a\x2d4727\x2db718\x2dcc60d72f5b60-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
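
Note: the mount unit names above (var-lib-kubelet-pods-...-hubble\x2dtls.mount and the ones that follow) are systemd's path escaping at work: "/" in the mount point becomes "-", and characters that would be ambiguous in a unit name, such as "-" and "~", are hex-escaped as \x2d and \x7e. A simplified sketch of the scheme (cf. systemd-escape --path; this approximates the rules rather than reproducing systemd's implementation):

    def systemd_escape_path(path):
        """Approximate systemd's path-to-unit-name escaping."""
        out = []
        for i, ch in enumerate(path.strip("/")):
            if ch == "/":
                out.append("-")  # path separators become dashes
            elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.extend(f"\\x{b:02x}" for b in ch.encode())  # e.g. '-' -> \x2d, '~' -> \x7e
        return "".join(out)

    unit = systemd_escape_path(
        "/var/lib/kubelet/pods/81aaebab-148a-4727-b718-cc60d72f5b60"
        "/volumes/kubernetes.io~projected/hubble-tls") + ".mount"
    # matches the hubble-tls .mount unit named in the entries above
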
Dec 13 01:58:48.713441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f71f3ef58a9d3273f9662fe66f8f403e69623153034cb6e05dd3982562417f5-rootfs.mount: Deactivated successfully.
Dec 13 01:58:48.713489 systemd[1]: var-lib-kubelet-pods-81aaebab\x2d148a\x2d4727\x2db718\x2dcc60d72f5b60-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:58:48.713539 systemd[1]: var-lib-kubelet-pods-9ccf0f7a\x2db588\x2d4567\x2dbca8\x2deace7a988482-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqfhl.mount: Deactivated successfully.
Dec 13 01:58:48.713598 systemd[1]: var-lib-kubelet-pods-81aaebab\x2d148a\x2d4727\x2db718\x2dcc60d72f5b60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxcxb.mount: Deactivated successfully.
Dec 13 01:58:48.855522 kubelet[2013]: I1213 01:58:48.855479 2013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" path="/var/lib/kubelet/pods/81aaebab-148a-4727-b718-cc60d72f5b60/volumes"
Dec 13 01:58:48.856115 kubelet[2013]: I1213 01:58:48.856088 2013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccf0f7a-b588-4567-bca8-eace7a988482" path="/var/lib/kubelet/pods/9ccf0f7a-b588-4567-bca8-eace7a988482/volumes"
Dec 13 01:58:49.679355 sshd[3639]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:49.682684 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:47516.service: Deactivated successfully.
Dec 13 01:58:49.683357 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:58:49.683968 systemd-logind[1198]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:58:49.685381 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:47520.service.
Dec 13 01:58:49.686441 systemd-logind[1198]: Removed session 24.
Dec 13 01:58:49.724084 sshd[3801]: Accepted publickey for core from 10.0.0.1 port 47520 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:49.725194 sshd[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:49.728526 systemd-logind[1198]: New session 25 of user core.
Dec 13 01:58:49.729256 systemd[1]: Started session-25.scope.
Dec 13 01:58:50.109941 sshd[3801]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:50.111259 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:47536.service.
Dec 13 01:58:50.113716 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:47520.service: Deactivated successfully.
Dec 13 01:58:50.114392 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:58:50.115036 systemd-logind[1198]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:58:50.116057 systemd-logind[1198]: Removed session 25.
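
Note: once every volume of a deleted pod has been unmounted and its mount units deactivated, kubelet's orphan sweep removes what is left under /var/lib/kubelet/pods/<pod-uid>/volumes; the two "Cleaned up orphaned pod volumes dir" entries above confirm both the Cilium agent pod and the operator pod were fully reaped. A sketch of the same check, assuming the set of active pod UIDs is supplied from elsewhere (hypothetical input):

    from pathlib import Path

    def orphaned_volume_dirs(active_uids, root="/var/lib/kubelet/pods"):
        """Pod dirs whose volumes/ subdir survives although the pod is gone."""
        return [d for d in Path(root).iterdir()
                if d.name not in active_uids and (d / "volumes").exists()]
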
Dec 13 01:58:50.124953 kubelet[2013]: I1213 01:58:50.124906 2013 topology_manager.go:215] "Topology Admit Handler" podUID="ffeb7a40-24f1-4435-9631-509399cca0c7" podNamespace="kube-system" podName="cilium-8rdz5"
Dec 13 01:58:50.124953 kubelet[2013]: E1213 01:58:50.124959 2013 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" containerName="mount-cgroup"
Dec 13 01:58:50.124953 kubelet[2013]: E1213 01:58:50.124967 2013 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" containerName="clean-cilium-state"
Dec 13 01:58:50.125434 kubelet[2013]: E1213 01:58:50.124973 2013 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" containerName="cilium-agent"
Dec 13 01:58:50.125434 kubelet[2013]: E1213 01:58:50.124979 2013 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ccf0f7a-b588-4567-bca8-eace7a988482" containerName="cilium-operator"
Dec 13 01:58:50.125434 kubelet[2013]: E1213 01:58:50.124985 2013 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" containerName="apply-sysctl-overwrites"
Dec 13 01:58:50.125434 kubelet[2013]: E1213 01:58:50.124990 2013 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" containerName="mount-bpf-fs"
Dec 13 01:58:50.125434 kubelet[2013]: I1213 01:58:50.125013 2013 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccf0f7a-b588-4567-bca8-eace7a988482" containerName="cilium-operator"
Dec 13 01:58:50.125434 kubelet[2013]: I1213 01:58:50.125018 2013 memory_manager.go:354] "RemoveStaleState removing state" podUID="81aaebab-148a-4727-b718-cc60d72f5b60" containerName="cilium-agent"
Dec 13 01:58:50.130469 systemd[1]: Created slice kubepods-burstable-podffeb7a40_24f1_4435_9631_509399cca0c7.slice.
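
Note: kubelet lines follow klog's header format, a severity letter (I/W/E/F) fused to MMDD, then time, thread id, and file:line] before the message. The E-level "RemoveStaleState: removing container" entries above look alarming but are routine: the replacement pod cilium-8rdz5 is being admitted, and the CPU and memory managers are discarding state pinned to containers of the two pods that were just deleted. A small parser for the klog header, useful for filtering such entries by severity:

    import re

    KLOG = re.compile(r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>[\d:.]+)\s+(?P<tid>\d+) "
                      r"(?P<src>[\w./-]+):(?P<line>\d+)\] (?P<msg>.*)")

    def parse_klog(entry):
        m = KLOG.search(entry)
        return m.groupdict() if m else None

    rec = parse_klog('E1213 01:58:50.124959 2013 cpu_manager.go:395] "RemoveStaleState: removing container"')
    assert rec["sev"] == "E" and rec["src"] == "cpu_manager.go"
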
Dec 13 01:58:50.131261 kubelet[2013]: W1213 01:58:50.131230 2013 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131423 kubelet[2013]: W1213 01:58:50.131358 2013 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131550 kubelet[2013]: E1213 01:58:50.131517 2013 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131550 kubelet[2013]: E1213 01:58:50.131373 2013 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131550 kubelet[2013]: W1213 01:58:50.131425 2013 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131550 kubelet[2013]: E1213 01:58:50.131554 2013 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131550 kubelet[2013]: W1213 01:58:50.131464 2013 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.131816 kubelet[2013]: E1213 01:58:50.131567 2013 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:58:50.135880 kubelet[2013]: I1213 01:58:50.135837 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-xtables-lock\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135880 kubelet[2013]: I1213 01:58:50.135871 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-net\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135958 kubelet[2013]: I1213 01:58:50.135886 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-kernel\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135958 kubelet[2013]: I1213 01:58:50.135900 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-cgroup\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135958 kubelet[2013]: I1213 01:58:50.135913 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-config-path\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135958 kubelet[2013]: I1213 01:58:50.135926 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-hostproc\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135958 kubelet[2013]: I1213 01:58:50.135938 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-run\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.135958 kubelet[2013]: I1213 01:58:50.135950 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-hubble-tls\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136091 kubelet[2013]: I1213 01:58:50.135963 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-bpf-maps\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136091 kubelet[2013]: I1213 01:58:50.135975 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-clustermesh-secrets\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136091 kubelet[2013]: I1213 01:58:50.135988 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhqz8\" (UniqueName: \"kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-kube-api-access-hhqz8\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136091 kubelet[2013]: I1213 01:58:50.136005 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cni-path\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136091 kubelet[2013]: I1213 01:58:50.136018 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-lib-modules\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136091 kubelet[2013]: I1213 01:58:50.136030 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-etc-cni-netd\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.136222 kubelet[2013]: I1213 01:58:50.136043 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-ipsec-secrets\") pod \"cilium-8rdz5\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") " pod="kube-system/cilium-8rdz5"
Dec 13 01:58:50.152052 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 47536 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:50.153504 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:50.158207 systemd[1]: Started session-26.scope.
Dec 13 01:58:50.158682 systemd-logind[1198]: New session 26 of user core.
Dec 13 01:58:50.275553 sshd[3811]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:50.278149 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:47536.service: Deactivated successfully.
Dec 13 01:58:50.278884 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:58:50.280778 systemd-logind[1198]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:58:50.281548 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:47546.service.
Dec 13 01:58:50.282332 systemd-logind[1198]: Removed session 26.
Dec 13 01:58:50.286472 kubelet[2013]: E1213 01:58:50.286433 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-8rdz5" podUID="ffeb7a40-24f1-4435-9631-509399cca0c7"
Dec 13 01:58:50.317936 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 47546 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:58:50.319067 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:50.322647 systemd-logind[1198]: New session 27 of user core.
Dec 13 01:58:50.323349 systemd[1]: Started session-27.scope.
Dec 13 01:58:50.853873 kubelet[2013]: E1213 01:58:50.853843 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:50.897958 kubelet[2013]: E1213 01:58:50.897927 2013 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:58:51.141713 kubelet[2013]: I1213 01:58:51.141362 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-kernel\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.141713 kubelet[2013]: I1213 01:58:51.141395 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cni-path\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.141713 kubelet[2013]: I1213 01:58:51.141421 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-net\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.141713 kubelet[2013]: I1213 01:58:51.141438 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-bpf-maps\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.141713 kubelet[2013]: I1213 01:58:51.141451 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-xtables-lock\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.141713 kubelet[2013]: I1213 01:58:51.141463 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-etc-cni-netd\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142109 kubelet[2013]: I1213 01:58:51.141477 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-cgroup\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142109 kubelet[2013]: I1213 01:58:51.141489 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-hostproc\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142109 kubelet[2013]: I1213 01:58:51.141497 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142109 kubelet[2013]: I1213 01:58:51.141534 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142109 kubelet[2013]: I1213 01:58:51.141513 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-config-path\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142224 kubelet[2013]: I1213 01:58:51.141509 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142224 kubelet[2013]: I1213 01:58:51.141577 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142224 kubelet[2013]: I1213 01:58:51.141592 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142224 kubelet[2013]: I1213 01:58:51.141593 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-ipsec-secrets\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142224 kubelet[2013]: I1213 01:58:51.141605 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141613 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-run\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141623 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141657 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhqz8\" (UniqueName: \"kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-kube-api-access-hhqz8\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141672 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-lib-modules\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141718 2013 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141727 2013 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142331 kubelet[2013]: I1213 01:58:51.141734 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142483 kubelet[2013]: I1213 01:58:51.141744 2013 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142483 kubelet[2013]: I1213 01:58:51.141752 2013 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142483 kubelet[2013]: I1213 01:58:51.141759 2013 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142483 kubelet[2013]: I1213 01:58:51.141766 2013 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.142483 kubelet[2013]: I1213 01:58:51.141783 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142483 kubelet[2013]: I1213 01:58:51.141798 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.142654 kubelet[2013]: I1213 01:58:51.141950 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:58:51.143105 kubelet[2013]: I1213 01:58:51.143076 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:58:51.144271 kubelet[2013]: I1213 01:58:51.144228 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:58:51.144648 kubelet[2013]: I1213 01:58:51.144599 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-kube-api-access-hhqz8" (OuterVolumeSpecName: "kube-api-access-hhqz8") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "kube-api-access-hhqz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:58:51.145336 systemd[1]: var-lib-kubelet-pods-ffeb7a40\x2d24f1\x2d4435\x2d9631\x2d509399cca0c7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:58:51.145427 systemd[1]: var-lib-kubelet-pods-ffeb7a40\x2d24f1\x2d4435\x2d9631\x2d509399cca0c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhqz8.mount: Deactivated successfully.
Dec 13 01:58:51.238257 kubelet[2013]: E1213 01:58:51.238211 2013 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Dec 13 01:58:51.238257 kubelet[2013]: E1213 01:58:51.238244 2013 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8rdz5: failed to sync secret cache: timed out waiting for the condition
Dec 13 01:58:51.238444 kubelet[2013]: E1213 01:58:51.238301 2013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-hubble-tls podName:ffeb7a40-24f1-4435-9631-509399cca0c7 nodeName:}" failed. No retries permitted until 2024-12-13 01:58:51.738284277 +0000 UTC m=+90.954818020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-hubble-tls") pod "cilium-8rdz5" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7") : failed to sync secret cache: timed out waiting for the condition
Dec 13 01:58:51.241885 kubelet[2013]: I1213 01:58:51.241854 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-clustermesh-secrets\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.241948 kubelet[2013]: I1213 01:58:51.241916 2013 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.241948 kubelet[2013]: I1213 01:58:51.241927 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.241948 kubelet[2013]: I1213 01:58:51.241935 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.241948 kubelet[2013]: I1213 01:58:51.241942 2013 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.241948 kubelet[2013]: I1213 01:58:51.241949 2013 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hhqz8\" (UniqueName: \"kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-kube-api-access-hhqz8\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.242066 kubelet[2013]: I1213 01:58:51.241958 2013 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffeb7a40-24f1-4435-9631-509399cca0c7-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.243942 kubelet[2013]: I1213 01:58:51.243910 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:58:51.245600 systemd[1]: var-lib-kubelet-pods-ffeb7a40\x2d24f1\x2d4435\x2d9631\x2d509399cca0c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:58:51.342722 kubelet[2013]: I1213 01:58:51.342683 2013 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffeb7a40-24f1-4435-9631-509399cca0c7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:51.945547 kubelet[2013]: I1213 01:58:51.945495 2013 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-hubble-tls\") pod \"ffeb7a40-24f1-4435-9631-509399cca0c7\" (UID: \"ffeb7a40-24f1-4435-9631-509399cca0c7\") "
Dec 13 01:58:51.947779 kubelet[2013]: I1213 01:58:51.947744 2013 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ffeb7a40-24f1-4435-9631-509399cca0c7" (UID: "ffeb7a40-24f1-4435-9631-509399cca0c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:58:51.948978 systemd[1]: var-lib-kubelet-pods-ffeb7a40\x2d24f1\x2d4435\x2d9631\x2d509399cca0c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:58:52.028676 systemd[1]: Removed slice kubepods-burstable-podffeb7a40_24f1_4435_9631_509399cca0c7.slice.
Dec 13 01:58:52.046165 kubelet[2013]: I1213 01:58:52.046122 2013 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffeb7a40-24f1-4435-9631-509399cca0c7-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 01:58:52.317267 kubelet[2013]: I1213 01:58:52.317120 2013 topology_manager.go:215] "Topology Admit Handler" podUID="a8586e1f-73bc-47c0-acf8-457556379b0d" podNamespace="kube-system" podName="cilium-shbj4"
Dec 13 01:58:52.324899 systemd[1]: Created slice kubepods-burstable-poda8586e1f_73bc_47c0_acf8_457556379b0d.slice.
Dec 13 01:58:52.448378 kubelet[2013]: I1213 01:58:52.448336 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8586e1f-73bc-47c0-acf8-457556379b0d-clustermesh-secrets\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448378 kubelet[2013]: I1213 01:58:52.448374 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-cilium-run\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448378 kubelet[2013]: I1213 01:58:52.448387 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-lib-modules\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448378 kubelet[2013]: I1213 01:58:52.448407 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-host-proc-sys-net\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448717 kubelet[2013]: I1213 01:58:52.448420 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8586e1f-73bc-47c0-acf8-457556379b0d-hubble-tls\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448717 kubelet[2013]: I1213 01:58:52.448433 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-hostproc\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448717 kubelet[2013]: I1213 01:58:52.448444 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-etc-cni-netd\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448717 kubelet[2013]: I1213 01:58:52.448458 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-bpf-maps\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448717 kubelet[2013]: I1213 01:58:52.448471 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-host-proc-sys-kernel\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448717 kubelet[2013]: I1213 01:58:52.448516 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv25d\" (UniqueName: \"kubernetes.io/projected/a8586e1f-73bc-47c0-acf8-457556379b0d-kube-api-access-sv25d\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448869 kubelet[2013]: I1213 01:58:52.448556 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8586e1f-73bc-47c0-acf8-457556379b0d-cilium-ipsec-secrets\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448869 kubelet[2013]: I1213 01:58:52.448575 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-xtables-lock\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448869 kubelet[2013]: I1213 01:58:52.448605 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-cni-path\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448869 kubelet[2013]: I1213 01:58:52.448659 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8586e1f-73bc-47c0-acf8-457556379b0d-cilium-config-path\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.448869 kubelet[2013]: I1213 01:58:52.448693 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8586e1f-73bc-47c0-acf8-457556379b0d-cilium-cgroup\") pod \"cilium-shbj4\" (UID: \"a8586e1f-73bc-47c0-acf8-457556379b0d\") " pod="kube-system/cilium-shbj4"
Dec 13 01:58:52.628923 kubelet[2013]: E1213 01:58:52.628807 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:52.629916 env[1209]: time="2024-12-13T01:58:52.629861035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shbj4,Uid:a8586e1f-73bc-47c0-acf8-457556379b0d,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:52.650376 env[1209]: time="2024-12-13T01:58:52.650295785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:52.650376 env[1209]: time="2024-12-13T01:58:52.650356451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:52.650376 env[1209]: time="2024-12-13T01:58:52.650370186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:52.650579 env[1209]: time="2024-12-13T01:58:52.650553155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a pid=3854 runtime=io.containerd.runc.v2
Dec 13 01:58:52.661663 systemd[1]: Started cri-containerd-9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a.scope.
Dec 13 01:58:52.679457 env[1209]: time="2024-12-13T01:58:52.679420187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shbj4,Uid:a8586e1f-73bc-47c0-acf8-457556379b0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\""
Dec 13 01:58:52.680508 kubelet[2013]: E1213 01:58:52.680257 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:52.683093 env[1209]: time="2024-12-13T01:58:52.683058039Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:58:52.695224 env[1209]: time="2024-12-13T01:58:52.695187708Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0\""
Dec 13 01:58:52.695684 env[1209]: time="2024-12-13T01:58:52.695665706Z" level=info msg="StartContainer for \"eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0\""
Dec 13 01:58:52.707386 systemd[1]: Started cri-containerd-eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0.scope.
Dec 13 01:58:52.728775 env[1209]: time="2024-12-13T01:58:52.728732699Z" level=info msg="StartContainer for \"eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0\" returns successfully"
Dec 13 01:58:52.734727 systemd[1]: cri-containerd-eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0.scope: Deactivated successfully.
Dec 13 01:58:52.760016 env[1209]: time="2024-12-13T01:58:52.759959676Z" level=info msg="shim disconnected" id=eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0
Dec 13 01:58:52.760016 env[1209]: time="2024-12-13T01:58:52.760008458Z" level=warning msg="cleaning up after shim disconnected" id=eb21bdd90266524993e3124e519825dd0fe7a723b2a56927eb286a1b83075ee0 namespace=k8s.io
Dec 13 01:58:52.760016 env[1209]: time="2024-12-13T01:58:52.760019670Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:52.766817 env[1209]: time="2024-12-13T01:58:52.766758595Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:52.855811 kubelet[2013]: I1213 01:58:52.855779 2013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffeb7a40-24f1-4435-9631-509399cca0c7" path="/var/lib/kubelet/pods/ffeb7a40-24f1-4435-9631-509399cca0c7/volumes"
Dec 13 01:58:53.027391 kubelet[2013]: E1213 01:58:53.027371 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:53.029784 env[1209]: time="2024-12-13T01:58:53.029744249Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:58:53.041061 env[1209]: time="2024-12-13T01:58:53.040971407Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182\""
Dec 13 01:58:53.042267 env[1209]: time="2024-12-13T01:58:53.042222624Z" level=info msg="StartContainer for \"0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182\""
Dec 13 01:58:53.058413 systemd[1]: Started cri-containerd-0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182.scope.
Dec 13 01:58:53.079657 env[1209]: time="2024-12-13T01:58:53.079590179Z" level=info msg="StartContainer for \"0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182\" returns successfully"
Dec 13 01:58:53.082717 systemd[1]: cri-containerd-0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182.scope: Deactivated successfully.
Dec 13 01:58:53.101502 env[1209]: time="2024-12-13T01:58:53.101443952Z" level=info msg="shim disconnected" id=0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182
Dec 13 01:58:53.101502 env[1209]: time="2024-12-13T01:58:53.101488727Z" level=warning msg="cleaning up after shim disconnected" id=0f5a600a131abd5794f08b8c2ce794ad0a6f331ff3f2805d6cbb0a97f8311182 namespace=k8s.io
Dec 13 01:58:53.101502 env[1209]: time="2024-12-13T01:58:53.101500711Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:53.107674 env[1209]: time="2024-12-13T01:58:53.107622890Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3998 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:53.253699 kubelet[2013]: I1213 01:58:53.253617 2013 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:58:53Z","lastTransitionTime":"2024-12-13T01:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:58:54.030497 kubelet[2013]: E1213 01:58:54.030459 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:54.032077 env[1209]: time="2024-12-13T01:58:54.032041678Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:58:54.195279 env[1209]: time="2024-12-13T01:58:54.195221873Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3\""
Dec 13 01:58:54.196026 env[1209]: time="2024-12-13T01:58:54.195975474Z" level=info msg="StartContainer for \"7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3\""
Dec 13 01:58:54.216269 systemd[1]: Started cri-containerd-7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3.scope.
Dec 13 01:58:54.242570 env[1209]: time="2024-12-13T01:58:54.241547831Z" level=info msg="StartContainer for \"7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3\" returns successfully"
Dec 13 01:58:54.243227 systemd[1]: cri-containerd-7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3.scope: Deactivated successfully.
Dec 13 01:58:54.265840 env[1209]: time="2024-12-13T01:58:54.265793125Z" level=info msg="shim disconnected" id=7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3
Dec 13 01:58:54.265840 env[1209]: time="2024-12-13T01:58:54.265838812Z" level=warning msg="cleaning up after shim disconnected" id=7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3 namespace=k8s.io
Dec 13 01:58:54.265840 env[1209]: time="2024-12-13T01:58:54.265848029Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:54.271576 env[1209]: time="2024-12-13T01:58:54.271547242Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4054 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:54.554252 systemd[1]: run-containerd-runc-k8s.io-7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3-runc.cLodMp.mount: Deactivated successfully.
Dec 13 01:58:54.554345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7815382fe51c3c2e3e2be1efe3c33879c765740bf8ae34bcd1741e27130f3eb3-rootfs.mount: Deactivated successfully.
Dec 13 01:58:55.034407 kubelet[2013]: E1213 01:58:55.034361 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:55.036050 env[1209]: time="2024-12-13T01:58:55.036007961Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:58:55.049674 env[1209]: time="2024-12-13T01:58:55.049608644Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a\""
Dec 13 01:58:55.050949 env[1209]: time="2024-12-13T01:58:55.050909053Z" level=info msg="StartContainer for \"b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a\""
Dec 13 01:58:55.066255 systemd[1]: Started cri-containerd-b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a.scope.
Dec 13 01:58:55.087880 systemd[1]: cri-containerd-b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a.scope: Deactivated successfully.
Dec 13 01:58:55.088712 env[1209]: time="2024-12-13T01:58:55.088658954Z" level=info msg="StartContainer for \"b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a\" returns successfully"
Dec 13 01:58:55.105925 env[1209]: time="2024-12-13T01:58:55.105884838Z" level=info msg="shim disconnected" id=b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a
Dec 13 01:58:55.105925 env[1209]: time="2024-12-13T01:58:55.105922230Z" level=warning msg="cleaning up after shim disconnected" id=b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a namespace=k8s.io
Dec 13 01:58:55.106134 env[1209]: time="2024-12-13T01:58:55.105930165Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:55.112220 env[1209]: time="2024-12-13T01:58:55.112177505Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4107 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:55.554529 systemd[1]: run-containerd-runc-k8s.io-b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a-runc.gQXhcc.mount: Deactivated successfully.
Dec 13 01:58:55.554626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6163cdf493bbc2485625d9d961bb6c2fc9581356a8d36f0031bf74f9688909a-rootfs.mount: Deactivated successfully.
Dec 13 01:58:55.899010 kubelet[2013]: E1213 01:58:55.898885 2013 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:58:56.037953 kubelet[2013]: E1213 01:58:56.037928 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:56.041112 env[1209]: time="2024-12-13T01:58:56.041066880Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:58:56.125344 env[1209]: time="2024-12-13T01:58:56.125270274Z" level=info msg="CreateContainer within sandbox \"9815e4d0a10935b5afe23b0640edb3d060b2b87f54ee78ff1906c288138c2b0a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d62699de3435d0477e4d7e27d34b3cdc7389647aa4282b6047eb1f59fc23390\""
Dec 13 01:58:56.125873 env[1209]: time="2024-12-13T01:58:56.125835197Z" level=info msg="StartContainer for \"8d62699de3435d0477e4d7e27d34b3cdc7389647aa4282b6047eb1f59fc23390\""
Dec 13 01:58:56.142816 systemd[1]: Started cri-containerd-8d62699de3435d0477e4d7e27d34b3cdc7389647aa4282b6047eb1f59fc23390.scope.
Dec 13 01:58:56.167982 env[1209]: time="2024-12-13T01:58:56.167942120Z" level=info msg="StartContainer for \"8d62699de3435d0477e4d7e27d34b3cdc7389647aa4282b6047eb1f59fc23390\" returns successfully"
Dec 13 01:58:56.436667 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:58:56.554317 systemd[1]: run-containerd-runc-k8s.io-8d62699de3435d0477e4d7e27d34b3cdc7389647aa4282b6047eb1f59fc23390-runc.RgPY1x.mount: Deactivated successfully.
Dec 13 01:58:56.854326 kubelet[2013]: E1213 01:58:56.854203 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:57.043005 kubelet[2013]: E1213 01:58:57.042973 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:58.630291 kubelet[2013]: E1213 01:58:58.630254 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:58.853586 kubelet[2013]: E1213 01:58:58.853550 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:59.085146 systemd-networkd[1032]: lxc_health: Link UP
Dec 13 01:58:59.092281 systemd-networkd[1032]: lxc_health: Gained carrier
Dec 13 01:58:59.092714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:59:00.111818 systemd-networkd[1032]: lxc_health: Gained IPv6LL
Dec 13 01:59:00.630845 kubelet[2013]: E1213 01:59:00.630790 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:59:00.661164 kubelet[2013]: I1213 01:59:00.660458 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-shbj4" podStartSLOduration=8.660438484 podStartE2EDuration="8.660438484s" podCreationTimestamp="2024-12-13 01:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:57.055064947 +0000 UTC m=+96.271598690" watchObservedRunningTime="2024-12-13 01:59:00.660438484 +0000 UTC m=+99.876972227"
Dec 13 01:59:01.050202 kubelet[2013]: E1213 01:59:01.050168 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:59:02.052219 kubelet[2013]: E1213 01:59:02.052179 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:59:04.839316 systemd[1]: run-containerd-runc-k8s.io-8d62699de3435d0477e4d7e27d34b3cdc7389647aa4282b6047eb1f59fc23390-runc.UUCdSk.mount: Deactivated successfully.
Dec 13 01:59:04.879000 kubelet[2013]: E1213 01:59:04.878947 2013 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37774->127.0.0.1:41313: write tcp 127.0.0.1:37774->127.0.0.1:41313: write: broken pipe
Dec 13 01:59:04.881534 sshd[3826]: pam_unix(sshd:session): session closed for user core
Dec 13 01:59:04.883532 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:47546.service: Deactivated successfully.
Dec 13 01:59:04.884254 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:59:04.884781 systemd-logind[1198]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:59:04.885458 systemd-logind[1198]: Removed session 27.