Dec 13 14:23:30.054737 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:23:30.054758 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:23:30.054768 kernel: BIOS-provided physical RAM map:
Dec 13 14:23:30.054774 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 14:23:30.054779 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 14:23:30.054784 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 14:23:30.054791 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 14:23:30.054797 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 14:23:30.054802 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 14:23:30.054809 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 14:23:30.054815 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 14:23:30.054820 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Dec 13 14:23:30.054826 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 14:23:30.054831 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 14:23:30.054838 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 14:23:30.054845 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 14:23:30.054851 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 14:23:30.054857 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:23:30.054863 kernel: NX (Execute Disable) protection: active
Dec 13 14:23:30.054869 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Dec 13 14:23:30.054875 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Dec 13 14:23:30.054881 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Dec 13 14:23:30.054887 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Dec 13 14:23:30.054892 kernel: extended physical RAM map:
Dec 13 14:23:30.054898 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 14:23:30.054905 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 14:23:30.054911 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 14:23:30.054917 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 14:23:30.054923 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 14:23:30.054929 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 14:23:30.054935 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 14:23:30.054941 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Dec 13 14:23:30.054946 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Dec 13 14:23:30.054952 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Dec 13 14:23:30.054958 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Dec 13 14:23:30.054964 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Dec 13 14:23:30.054982 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Dec 13 14:23:30.054988 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 14:23:30.054994 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 14:23:30.055000 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 14:23:30.055009 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 14:23:30.055015 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 14:23:30.055029 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:23:30.055036 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:23:30.055043 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 
Dec 13 14:23:30.055050 kernel: random: crng init done
Dec 13 14:23:30.055056 kernel: SMBIOS 2.8 present.
Dec 13 14:23:30.055063 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 14:23:30.055069 kernel: Hypervisor detected: KVM
Dec 13 14:23:30.055075 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:23:30.055082 kernel: kvm-clock: cpu 0, msr 5a19a001, primary cpu clock
Dec 13 14:23:30.055088 kernel: kvm-clock: using sched offset of 5160430574 cycles
Dec 13 14:23:30.055096 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:23:30.055103 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 14:23:30.055110 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:23:30.055117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:23:30.055123 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 14:23:30.055130 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 14:23:30.055136 kernel: Using GB pages for direct mapping
Dec 13 14:23:30.055143 kernel: Secure boot disabled
Dec 13 14:23:30.055149 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:23:30.055157 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 14:23:30.055164 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS  BXPC     00000001      01000013)
Dec 13 14:23:30.055170 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 14:23:30.055177 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 14:23:30.055183 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 14:23:30.055190 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 14:23:30.055197 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 14:23:30.055203 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 14:23:30.055210 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 14:23:30.055218 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL  EDK2     00000002      01000013)
Dec 13 14:23:30.055224 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 14:23:30.055231 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 14:23:30.055237 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 14:23:30.055244 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 14:23:30.055250 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 14:23:30.055257 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 14:23:30.055263 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 14:23:30.055270 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 14:23:30.055277 kernel: No NUMA configuration found
Dec 13 14:23:30.055284 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 14:23:30.055290 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 14:23:30.055297 kernel: Zone ranges:
Dec 13 14:23:30.055304 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:23:30.055310 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 14:23:30.055317 kernel:   Normal   empty
Dec 13 14:23:30.055323 kernel: Movable zone start for each node
Dec 13 14:23:30.055330 kernel: Early memory node ranges
Dec 13 14:23:30.055338 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 14:23:30.055344 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 14:23:30.055350 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 14:23:30.055357 kernel:   node   0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 14:23:30.055363 kernel:   node   0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 14:23:30.055370 kernel:   node   0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 14:23:30.055376 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 14:23:30.055383 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:23:30.055389 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 14:23:30.055396 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 14:23:30.055403 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:23:30.055410 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 14:23:30.055416 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 14:23:30.055423 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 14:23:30.055429 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:23:30.055436 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:23:30.055442 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:23:30.055449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:23:30.055455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:23:30.055463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:23:30.055470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:23:30.055476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:23:30.055483 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:23:30.055489 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:23:30.055496 kernel: TSC deadline timer available
Dec 13 14:23:30.055502 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 14:23:30.055508 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 14:23:30.055515 kernel: kvm-guest: setup PV sched yield
Dec 13 14:23:30.055523 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 14:23:30.055529 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:23:30.055540 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:23:30.055549 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 14:23:30.055556 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 14:23:30.055563 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 14:23:30.055569 kernel: pcpu-alloc: [0] 0 1 2 3 
Dec 13 14:23:30.055576 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 14:23:30.055583 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Dec 13 14:23:30.055590 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:23:30.055596 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:23:30.055603 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 629759
Dec 13 14:23:30.055612 kernel: Policy zone: DMA32
Dec 13 14:23:30.055619 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:23:30.055627 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:23:30.055634 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:23:30.055642 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:23:30.055649 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:23:30.055656 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 169308K reserved, 0K cma-reserved)
Dec 13 14:23:30.055663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:23:30.055670 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:23:30.055676 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:23:30.055683 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:23:30.055691 kernel: rcu:         RCU event tracing is enabled.
Dec 13 14:23:30.055698 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:23:30.055706 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 14:23:30.055713 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 14:23:30.055720 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:23:30.055727 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:23:30.055733 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 14:23:30.055740 kernel: Console: colour dummy device 80x25
Dec 13 14:23:30.055747 kernel: printk: console [ttyS0] enabled
Dec 13 14:23:30.055754 kernel: ACPI: Core revision 20210730
Dec 13 14:23:30.055761 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 14:23:30.055769 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:23:30.055776 kernel: x2apic enabled
Dec 13 14:23:30.055783 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:23:30.055790 kernel: kvm-guest: setup PV IPIs
Dec 13 14:23:30.055797 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 14:23:30.055804 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 14:23:30.055810 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 14:23:30.055818 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:23:30.055824 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 14:23:30.055832 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 14:23:30.055839 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:23:30.055846 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:23:30.055853 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:23:30.055860 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:23:30.055867 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 14:23:30.055874 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 14:23:30.055881 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:23:30.055888 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:23:30.055896 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:23:30.055903 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:23:30.055910 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:23:30.055917 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 14:23:30.055924 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:23:30.055931 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:23:30.055938 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:23:30.055944 kernel: LSM: Security Framework initializing
Dec 13 14:23:30.055951 kernel: SELinux:  Initializing.
Dec 13 14:23:30.055959 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:23:30.055983 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:23:30.055990 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 14:23:30.055997 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 14:23:30.056004 kernel: ... version:                0
Dec 13 14:23:30.056011 kernel: ... bit width:              48
Dec 13 14:23:30.056018 kernel: ... generic registers:      6
Dec 13 14:23:30.056032 kernel: ... value mask:             0000ffffffffffff
Dec 13 14:23:30.056040 kernel: ... max period:             00007fffffffffff
Dec 13 14:23:30.056048 kernel: ... fixed-purpose events:   0
Dec 13 14:23:30.056055 kernel: ... event mask:             000000000000003f
Dec 13 14:23:30.056062 kernel: signal: max sigframe size: 1776
Dec 13 14:23:30.056068 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:23:30.056075 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:23:30.056082 kernel: x86: Booting SMP configuration:
Dec 13 14:23:30.056089 kernel: .... node  #0, CPUs:      #1
Dec 13 14:23:30.056096 kernel: kvm-clock: cpu 1, msr 5a19a041, secondary cpu clock
Dec 13 14:23:30.056102 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 14:23:30.056111 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Dec 13 14:23:30.056117 kernel:  #2
Dec 13 14:23:30.056124 kernel: kvm-clock: cpu 2, msr 5a19a081, secondary cpu clock
Dec 13 14:23:30.056131 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 14:23:30.056138 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Dec 13 14:23:30.056145 kernel:  #3
Dec 13 14:23:30.056152 kernel: kvm-clock: cpu 3, msr 5a19a0c1, secondary cpu clock
Dec 13 14:23:30.056158 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 14:23:30.056165 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Dec 13 14:23:30.056173 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:23:30.056180 kernel: smpboot: Max logical packages: 1
Dec 13 14:23:30.056187 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 14:23:30.056194 kernel: devtmpfs: initialized
Dec 13 14:23:30.056201 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:23:30.056208 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 14:23:30.056215 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 14:23:30.056222 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 14:23:30.056229 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 14:23:30.056237 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 14:23:30.056244 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:23:30.056250 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:23:30.056257 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:23:30.056264 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:23:30.056271 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:23:30.056278 kernel: audit: type=2000 audit(1734099809.035:1): state=initialized audit_enabled=0 res=1
Dec 13 14:23:30.056285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:23:30.056291 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:23:30.056299 kernel: cpuidle: using governor menu
Dec 13 14:23:30.056306 kernel: ACPI: bus type PCI registered
Dec 13 14:23:30.056313 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:23:30.056320 kernel: dca service started, version 1.12.1
Dec 13 14:23:30.056327 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:23:30.056334 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 14:23:30.056341 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:23:30.056348 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:23:30.056355 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:23:30.056363 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:23:30.056370 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:23:30.056376 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:23:30.056383 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:23:30.056390 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:23:30.056397 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:23:30.056404 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:23:30.056411 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:23:30.056417 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:23:30.056426 kernel: ACPI: Interpreter enabled
Dec 13 14:23:30.056432 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:23:30.056439 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:23:30.056446 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:23:30.056453 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:23:30.056460 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:23:30.056573 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:23:30.056646 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 14:23:30.056717 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 14:23:30.056727 kernel: PCI host bridge to bus 0000:00
Dec 13 14:23:30.056814 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 14:23:30.056877 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 14:23:30.056939 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:23:30.057015 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 14:23:30.057086 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:23:30.057152 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 14:23:30.057213 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:23:30.057291 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:23:30.057375 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 14:23:30.057444 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 14:23:30.057535 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 14:23:30.057629 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 14:23:30.061499 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 14:23:30.061595 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:23:30.061675 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:23:30.061750 kernel: pci 0000:00:02.0: reg 0x10: [io  0x6100-0x611f]
Dec 13 14:23:30.061819 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 14:23:30.061885 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 14:23:30.061962 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:23:30.062057 kernel: pci 0000:00:03.0: reg 0x10: [io  0x6000-0x607f]
Dec 13 14:23:30.062125 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 14:23:30.062192 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 14:23:30.062267 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:23:30.062333 kernel: pci 0000:00:04.0: reg 0x10: [io  0x60e0-0x60ff]
Dec 13 14:23:30.062400 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 14:23:30.062476 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 14:23:30.062542 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 14:23:30.062618 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:23:30.062687 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:23:30.062786 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:23:30.062854 kernel: pci 0000:00:1f.2: reg 0x20: [io  0x60c0-0x60df]
Dec 13 14:23:30.062921 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 14:23:30.063057 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:23:30.063168 kernel: pci 0000:00:1f.3: reg 0x20: [io  0x6080-0x60bf]
Dec 13 14:23:30.063181 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:23:30.063190 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:23:30.063199 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:23:30.063208 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:23:30.063216 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:23:30.063225 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:23:30.063238 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:23:30.063246 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:23:30.063255 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:23:30.063263 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:23:30.063272 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:23:30.063280 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:23:30.063289 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:23:30.063297 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:23:30.063306 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:23:30.063318 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:23:30.063327 kernel: iommu: Default domain type: Translated 
Dec 13 14:23:30.063335 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 13 14:23:30.063434 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:23:30.063528 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:23:30.063608 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:23:30.063619 kernel: vgaarb: loaded
Dec 13 14:23:30.063626 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:23:30.063633 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:23:30.063642 kernel: PTP clock support registered
Dec 13 14:23:30.063649 kernel: Registered efivars operations
Dec 13 14:23:30.063657 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:23:30.063666 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:23:30.063675 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 14:23:30.063684 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 14:23:30.063693 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Dec 13 14:23:30.063702 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Dec 13 14:23:30.063708 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 14:23:30.063717 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 14:23:30.063724 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 14:23:30.063731 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 14:23:30.063738 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:23:30.063744 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:23:30.063751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:23:30.063758 kernel: pnp: PnP ACPI init
Dec 13 14:23:30.063838 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:23:30.063851 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 14:23:30.063858 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:23:30.063879 kernel: NET: Registered PF_INET protocol family
Dec 13 14:23:30.063900 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:23:30.063915 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:23:30.063922 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:23:30.063929 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:23:30.063936 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:23:30.063945 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:23:30.063952 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:23:30.063959 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:23:30.063978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:23:30.063985 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:23:30.064092 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 14:23:30.064191 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 14:23:30.064256 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 14:23:30.064320 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 14:23:30.064380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:23:30.064453 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 14:23:30.064515 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:23:30.064618 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 14:23:30.064629 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:23:30.064636 kernel: Initialise system trusted keyrings
Dec 13 14:23:30.064643 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:23:30.064650 kernel: Key type asymmetric registered
Dec 13 14:23:30.064660 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:23:30.064667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:23:30.064683 kernel: io scheduler mq-deadline registered
Dec 13 14:23:30.064692 kernel: io scheduler kyber registered
Dec 13 14:23:30.064699 kernel: io scheduler bfq registered
Dec 13 14:23:30.064706 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:23:30.064715 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 14:23:30.064722 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 14:23:30.064729 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 14:23:30.064738 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:23:30.064745 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:23:30.064753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:23:30.064760 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:23:30.064767 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:23:30.064774 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:23:30.064848 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 14:23:30.064911 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 14:23:30.064990 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:23:29 UTC (1734099809)
Dec 13 14:23:30.065065 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 14:23:30.065075 kernel: efifb: probing for efifb
Dec 13 14:23:30.065082 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 13 14:23:30.065090 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 13 14:23:30.065097 kernel: efifb: scrolling: redraw
Dec 13 14:23:30.065104 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:23:30.065112 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 14:23:30.065119 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:23:30.065129 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:23:30.065136 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:23:30.065144 kernel: Segment Routing with IPv6
Dec 13 14:23:30.065153 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:23:30.065160 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:23:30.065167 kernel: Key type dns_resolver registered
Dec 13 14:23:30.065176 kernel: IPI shorthand broadcast: enabled
Dec 13 14:23:30.065184 kernel: sched_clock: Marking stable (735001610, 177078111)->(1129592639, -217512918)
Dec 13 14:23:30.065192 kernel: registered taskstats version 1
Dec 13 14:23:30.065199 kernel: Loading compiled-in X.509 certificates
Dec 13 14:23:30.065206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:23:30.065214 kernel: Key type .fscrypt registered
Dec 13 14:23:30.065222 kernel: Key type fscrypt-provisioning registered
Dec 13 14:23:30.065229 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:23:30.065238 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:23:30.065245 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:23:30.065252 kernel: ima: No architecture policies found
Dec 13 14:23:30.065260 kernel: clk: Disabling unused clocks
Dec 13 14:23:30.065267 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:23:30.065275 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:23:30.065282 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:23:30.065290 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:23:30.065297 kernel: Run /init as init process
Dec 13 14:23:30.065305 kernel:   with arguments:
Dec 13 14:23:30.065313 kernel:     /init
Dec 13 14:23:30.065320 kernel:   with environment:
Dec 13 14:23:30.065327 kernel:     HOME=/
Dec 13 14:23:30.065334 kernel:     TERM=linux
Dec 13 14:23:30.065341 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:23:30.065351 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:23:30.065360 systemd[1]: Detected virtualization kvm.
Dec 13 14:23:30.065369 systemd[1]: Detected architecture x86-64.
Dec 13 14:23:30.065377 systemd[1]: Running in initrd.
Dec 13 14:23:30.065387 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:23:30.065397 systemd[1]: Hostname set to <localhost>.
Dec 13 14:23:30.065408 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:23:30.065416 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:23:30.065424 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:23:30.065432 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:23:30.065441 systemd[1]: Reached target paths.target.
Dec 13 14:23:30.065449 systemd[1]: Reached target slices.target.
Dec 13 14:23:30.065457 systemd[1]: Reached target swap.target.
Dec 13 14:23:30.065464 systemd[1]: Reached target timers.target.
Dec 13 14:23:30.065472 systemd[1]: Listening on iscsid.socket.
Dec 13 14:23:30.065480 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:23:30.065487 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:23:30.065495 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:23:30.065504 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:23:30.065512 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:23:30.065520 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:23:30.065527 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:23:30.065535 systemd[1]: Reached target sockets.target.
Dec 13 14:23:30.065543 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:23:30.065551 systemd[1]: Finished network-cleanup.service.
Dec 13 14:23:30.065558 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:23:30.065566 systemd[1]: Starting systemd-journald.service...
Dec 13 14:23:30.065575 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:23:30.065583 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:23:30.065591 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:23:30.065598 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:23:30.065606 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:23:30.065614 kernel: audit: type=1130 audit(1734099810.054:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.065622 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:23:30.065629 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:23:30.065640 systemd-journald[198]: Journal started
Dec 13 14:23:30.065680 systemd-journald[198]: Runtime Journal (/run/log/journal/97d50b65e54740f5ba92564834ed0ecc) is 6.0M, max 48.4M, 42.4M free.
Dec 13 14:23:30.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.064151 systemd-modules-load[199]: Inserted module 'overlay'
Dec 13 14:23:30.070818 kernel: audit: type=1130 audit(1734099810.066:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.070836 systemd[1]: Started systemd-journald.service.
Dec 13 14:23:30.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.076007 kernel: audit: type=1130 audit(1734099810.070:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.075614 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:23:30.079726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:23:30.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.083991 kernel: audit: type=1130 audit(1734099810.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.091373 systemd-resolved[200]: Positive Trust Anchors:
Dec 13 14:23:30.091397 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:23:30.091424 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:23:30.101619 systemd-resolved[200]: Defaulting to hostname 'linux'.
Dec 13 14:23:30.103819 systemd[1]: Started systemd-resolved.service.
Dec 13 14:23:30.104347 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:23:30.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.107990 kernel: audit: type=1130 audit(1734099810.102:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.151015 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:23:30.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.153723 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:23:30.157145 kernel: audit: type=1130 audit(1734099810.151:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.161000 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:23:30.162444 dracut-cmdline[218]: dracut-dracut-053
Dec 13 14:23:30.164396 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:23:30.170998 kernel: Bridge firewalling registered
Dec 13 14:23:30.170963 systemd-modules-load[199]: Inserted module 'br_netfilter'
Dec 13 14:23:30.189003 kernel: SCSI subsystem initialized
Dec 13 14:23:30.200365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:23:30.200412 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:23:30.200426 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:23:30.204481 systemd-modules-load[199]: Inserted module 'dm_multipath'
Dec 13 14:23:30.205181 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:23:30.211461 kernel: audit: type=1130 audit(1734099810.205:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.207415 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:23:30.214950 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:23:30.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.219988 kernel: audit: type=1130 audit(1734099810.216:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.222999 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:23:30.283013 kernel: iscsi: registered transport (tcp)
Dec 13 14:23:30.305002 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:23:30.305067 kernel: QLogic iSCSI HBA Driver
Dec 13 14:23:30.338775 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:23:30.344062 kernel: audit: type=1130 audit(1734099810.337:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.344064 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:23:30.405022 kernel: raid6: avx2x4   gen() 30270 MB/s
Dec 13 14:23:30.422025 kernel: raid6: avx2x4   xor()  7486 MB/s
Dec 13 14:23:30.452027 kernel: raid6: avx2x2   gen() 31870 MB/s
Dec 13 14:23:30.498026 kernel: raid6: avx2x2   xor() 19116 MB/s
Dec 13 14:23:30.544042 kernel: raid6: avx2x1   gen() 25571 MB/s
Dec 13 14:23:30.561067 kernel: raid6: avx2x1   xor() 14415 MB/s
Dec 13 14:23:30.590028 kernel: raid6: sse2x4   gen() 13867 MB/s
Dec 13 14:23:30.627022 kernel: raid6: sse2x4   xor()  6843 MB/s
Dec 13 14:23:30.644031 kernel: raid6: sse2x2   gen() 16148 MB/s
Dec 13 14:23:30.691027 kernel: raid6: sse2x2   xor()  9724 MB/s
Dec 13 14:23:30.713027 kernel: raid6: sse2x1   gen() 10412 MB/s
Dec 13 14:23:30.730430 kernel: raid6: sse2x1   xor()  7412 MB/s
Dec 13 14:23:30.730509 kernel: raid6: using algorithm avx2x2 gen() 31870 MB/s
Dec 13 14:23:30.730522 kernel: raid6: .... xor() 19116 MB/s, rmw enabled
Dec 13 14:23:30.731989 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:23:30.758018 kernel: xor: automatically using best checksumming function   avx       
Dec 13 14:23:30.870029 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:23:30.879090 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:23:30.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.879000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:23:30.880000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:23:30.881442 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:23:30.898650 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Dec 13 14:23:30.904820 systemd[1]: Started systemd-udevd.service.
Dec 13 14:23:30.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.907913 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:23:30.920311 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Dec 13 14:23:30.952030 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:23:30.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:30.953905 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:23:30.989388 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:23:30.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:31.045010 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 14:23:31.059493 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:23:31.059514 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:23:31.059523 kernel: GPT:9289727 != 19775487
Dec 13 14:23:31.059531 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:23:31.059540 kernel: GPT:9289727 != 19775487
Dec 13 14:23:31.059548 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:23:31.059556 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:31.067009 kernel: libata version 3.00 loaded.
Dec 13 14:23:31.073683 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:23:31.073727 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:23:31.146867 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 14:23:31.223314 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 14:23:31.223336 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 14:23:31.223450 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Dec 13 14:23:31.223544 kernel: scsi host0: ahci
Dec 13 14:23:31.223659 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (439)
Dec 13 14:23:31.223673 kernel: scsi host1: ahci
Dec 13 14:23:31.223777 kernel: scsi host2: ahci
Dec 13 14:23:31.223891 kernel: scsi host3: ahci
Dec 13 14:23:31.224032 kernel: scsi host4: ahci
Dec 13 14:23:31.224134 kernel: scsi host5: ahci
Dec 13 14:23:31.224241 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Dec 13 14:23:31.224255 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Dec 13 14:23:31.224268 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Dec 13 14:23:31.224280 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Dec 13 14:23:31.224295 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Dec 13 14:23:31.224307 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Dec 13 14:23:31.163401 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:23:31.202109 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:23:31.213676 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:23:31.222485 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:23:31.232447 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:23:31.233706 systemd[1]: Starting disk-uuid.service...
Dec 13 14:23:31.490820 disk-uuid[531]: Primary Header is updated.
Dec 13 14:23:31.490820 disk-uuid[531]: Secondary Entries is updated.
Dec 13 14:23:31.490820 disk-uuid[531]: Secondary Header is updated.
Dec 13 14:23:31.514331 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:31.517003 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:31.534256 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 14:23:31.534298 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:31.534307 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:31.534316 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 14:23:31.536015 kernel: ata3.00: applying bridge limits
Dec 13 14:23:31.536037 kernel: ata3.00: configured for UDMA/100
Dec 13 14:23:31.575009 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:31.575060 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:31.576011 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 13 14:23:31.578001 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:31.618022 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 14:23:31.635608 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:23:31.635622 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 14:23:32.517913 disk-uuid[532]: The operation has completed successfully.
Dec 13 14:23:32.519685 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:32.588567 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:23:32.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.588650 systemd[1]: Finished disk-uuid.service.
Dec 13 14:23:32.589771 systemd[1]: Starting verity-setup.service...
Dec 13 14:23:32.605015 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 14:23:32.629250 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:23:32.631482 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:23:32.633602 systemd[1]: Finished verity-setup.service.
Dec 13 14:23:32.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.732996 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:23:32.733437 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:23:32.733913 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:23:32.734704 systemd[1]: Starting ignition-setup.service...
Dec 13 14:23:32.738961 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:23:32.746041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:23:32.746080 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:23:32.746092 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:23:32.756391 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:23:32.771182 systemd[1]: Finished ignition-setup.service.
Dec 13 14:23:32.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.772905 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:23:32.849668 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:23:32.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.872000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:23:32.873708 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:23:32.877830 ignition[643]: Ignition 2.14.0
Dec 13 14:23:32.877846 ignition[643]: Stage: fetch-offline
Dec 13 14:23:32.877919 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:32.877933 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:32.878118 ignition[643]: parsed url from cmdline: ""
Dec 13 14:23:32.878122 ignition[643]: no config URL provided
Dec 13 14:23:32.878129 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:23:32.878139 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:23:32.878164 ignition[643]: op(1): [started]  loading QEMU firmware config module
Dec 13 14:23:32.878171 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 14:23:32.883126 ignition[643]: op(1): [finished] loading QEMU firmware config module
Dec 13 14:23:32.905203 systemd-networkd[714]: lo: Link UP
Dec 13 14:23:32.905218 systemd-networkd[714]: lo: Gained carrier
Dec 13 14:23:32.905845 systemd-networkd[714]: Enumeration completed
Dec 13 14:23:32.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.906154 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:23:32.906336 systemd[1]: Started systemd-networkd.service.
Dec 13 14:23:32.907472 systemd-networkd[714]: eth0: Link UP
Dec 13 14:23:32.907478 systemd-networkd[714]: eth0: Gained carrier
Dec 13 14:23:32.909614 systemd[1]: Reached target network.target.
Dec 13 14:23:32.912629 systemd[1]: Starting iscsiuio.service...
Dec 13 14:23:32.918087 systemd[1]: Started iscsiuio.service.
Dec 13 14:23:32.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.920555 systemd[1]: Starting iscsid.service...
Dec 13 14:23:32.924268 iscsid[720]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:23:32.924268 iscsid[720]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:23:32.924268 iscsid[720]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:23:32.924268 iscsid[720]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:23:32.924268 iscsid[720]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:23:32.924268 iscsid[720]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
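The iscsid warning above doubles as a how-to: it wants /etc/iscsi/initiatorname.iscsi to hold a single InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] line. A small sketch that builds such a line and writes it out; the domain, identifier, date and output path below are placeholders, not values taken from this host:

```python
from datetime import date
from pathlib import Path

def make_iqn(reversed_domain, identifier=None, when=None):
    """Build an iqn.yyyy-mm.<reversed domain>[:identifier] name in the format
    described by the iscsid message above; all inputs here are illustrative."""
    when = when or date.today()
    iqn = f"iqn.{when.year:04d}-{when.month:02d}.{reversed_domain}"
    return f"{iqn}:{identifier}" if identifier else iqn

# Hypothetical values; a real node would use its own domain and a stable identifier.
content = f"InitiatorName={make_iqn('io.flatcar.example', 'node1', date(2024, 12, 1))}\n"
Path("initiatorname.iscsi").write_text(content)  # install to /etc/iscsi/ on a real system
print(content, end="")                           # InitiatorName=iqn.2024-12.io.flatcar.example:node1
```

On a real system the file would be installed at /etc/iscsi/initiatorname.iscsi before iscsid starts, so the warning does not recur on the next boot.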
Dec 13 14:23:32.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.925585 systemd[1]: Started iscsid.service.
Dec 13 14:23:32.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.927814 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:23:32.940578 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:23:32.942106 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:23:32.943130 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:23:32.944152 systemd[1]: Reached target remote-fs.target.
Dec 13 14:23:32.946709 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:23:32.955529 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:23:32.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.981916 ignition[643]: parsing config with SHA512: 0ce2b1596e4825e94d2f3efe9cd72a6cec625b53d2af1c1e2762a3e0368f08596de20edcdf546f41a485ff143391e0e3adbfa440f77a59241c6fa087e080ce4e
Dec 13 14:23:32.989694 unknown[643]: fetched base config from "system"
Dec 13 14:23:32.989705 unknown[643]: fetched user config from "qemu"
Dec 13 14:23:32.990139 ignition[643]: fetch-offline: fetch-offline passed
Dec 13 14:23:32.990183 ignition[643]: Ignition finished successfully
Dec 13 14:23:32.992055 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:23:32.995837 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:23:32.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:32.997760 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:23:32.999584 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:23:33.009644 ignition[735]: Ignition 2.14.0
Dec 13 14:23:33.010875 ignition[735]: Stage: kargs
Dec 13 14:23:33.011013 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:33.011024 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:33.015136 ignition[735]: kargs: kargs passed
Dec 13 14:23:33.015185 ignition[735]: Ignition finished successfully
Dec 13 14:23:33.018055 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:23:33.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:33.019871 systemd[1]: Starting ignition-disks.service...
Dec 13 14:23:33.029499 ignition[741]: Ignition 2.14.0
Dec 13 14:23:33.029520 ignition[741]: Stage: disks
Dec 13 14:23:33.029633 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:33.029645 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:33.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:33.032174 systemd[1]: Finished ignition-disks.service.
Dec 13 14:23:33.030926 ignition[741]: disks: disks passed
Dec 13 14:23:33.034105 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:23:33.031015 ignition[741]: Ignition finished successfully
Dec 13 14:23:33.036591 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:23:33.037714 systemd[1]: Reached target local-fs.target.
Dec 13 14:23:33.039755 systemd[1]: Reached target sysinit.target.
Dec 13 14:23:33.040282 systemd[1]: Reached target basic.target.
Dec 13 14:23:33.041832 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:23:33.056324 systemd-fsck[749]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:23:33.064332 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:23:33.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:33.065641 systemd[1]: Mounting sysroot.mount...
Dec 13 14:23:33.072895 systemd[1]: Mounted sysroot.mount.
Dec 13 14:23:33.073598 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:23:33.073440 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:23:33.075020 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:23:33.076589 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:23:33.076620 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:23:33.076640 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:23:33.078353 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:23:33.080216 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:23:33.088319 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:23:33.091925 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:23:33.096043 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:23:33.100250 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:23:33.126003 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:23:33.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:33.127748 systemd[1]: Starting ignition-mount.service...
Dec 13 14:23:33.129247 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:23:33.132545 bash[800]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 14:23:33.140034 ignition[801]: INFO     : Ignition 2.14.0
Dec 13 14:23:33.140034 ignition[801]: INFO     : Stage: mount
Dec 13 14:23:33.143245 ignition[801]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:33.143245 ignition[801]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:33.143245 ignition[801]: INFO     : mount: mount passed
Dec 13 14:23:33.143245 ignition[801]: INFO     : Ignition finished successfully
Dec 13 14:23:33.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:33.142174 systemd[1]: Finished ignition-mount.service.
Dec 13 14:23:33.172625 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:23:33.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:33.647439 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:23:33.656782 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Dec 13 14:23:33.656862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:23:33.656879 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:23:33.657644 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:23:33.662877 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:23:33.664806 systemd[1]: Starting ignition-files.service...
Dec 13 14:23:33.682077 ignition[831]: INFO     : Ignition 2.14.0
Dec 13 14:23:33.682077 ignition[831]: INFO     : Stage: files
Dec 13 14:23:33.684173 ignition[831]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:33.684173 ignition[831]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:33.684173 ignition[831]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 14:23:33.688053 ignition[831]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 14:23:33.688053 ignition[831]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:23:33.688053 ignition[831]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:23:33.688053 ignition[831]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 14:23:33.694327 ignition[831]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:23:33.694327 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:23:33.694327 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:23:33.688211 unknown[831]: wrote ssh authorized keys file for user: core
Dec 13 14:23:33.732759 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:23:33.968412 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:23:33.970407 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:23:33.970407 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 14:23:34.180262 systemd-networkd[714]: eth0: Gained IPv6LL
Dec 13 14:23:34.322587 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 14:23:34.561605 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:23:34.561605 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:34.566035 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:23:34.837759 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 14:23:35.436362 ignition[831]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:35.436362 ignition[831]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(e): [started]  processing unit "coreos-metadata.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(11): [started]  setting preset to disabled for "coreos-metadata.service"
Dec 13 14:23:35.440218 ignition[831]: INFO     : files: op(11): op(12): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:23:35.518486 ignition[831]: INFO     : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:23:35.520363 ignition[831]: INFO     : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:23:35.520363 ignition[831]: INFO     : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:23:35.520363 ignition[831]: INFO     : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:23:35.520363 ignition[831]: INFO     : files: files passed
Dec 13 14:23:35.520363 ignition[831]: INFO     : Ignition finished successfully
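The op(10)–op(12) entries above record what the files stage means by setting a preset: an enablement symlink for the unit is created or removed under the target root. A toy sketch of that bookkeeping, assuming a WantedBy=multi-user.target install section and the /sysroot prefix seen in the log (both are assumptions about these particular units, not facts read from them):

```python
from pathlib import Path

SYSROOT = Path("sysroot")  # stand-in for the /sysroot prefix used in the log

def set_preset(unit, enabled, wanted_by="multi-user.target"):
    """Create or remove the enablement symlink a preset implies (illustrative only)."""
    wants_dir = SYSROOT / "etc/systemd/system" / f"{wanted_by}.wants"
    link = wants_dir / unit
    if enabled:
        wants_dir.mkdir(parents=True, exist_ok=True)
        if not link.is_symlink():
            link.symlink_to(f"/etc/systemd/system/{unit}")
    elif link.is_symlink():
        link.unlink()  # "removing enablement symlink(s)"

set_preset("prepare-helm.service", enabled=True)      # op(10)
set_preset("coreos-metadata.service", enabled=False)  # op(11)/op(12)
```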
Dec 13 14:23:35.528075 systemd[1]: Finished ignition-files.service.
Dec 13 14:23:35.533926 kernel: kauditd_printk_skb: 24 callbacks suppressed
Dec 13 14:23:35.533956 kernel: audit: type=1130 audit(1734099815.527:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.533999 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:23:35.536013 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:23:35.538314 systemd[1]: Starting ignition-quench.service...
Dec 13 14:23:35.541462 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 14:23:35.543139 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:23:35.543593 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:23:35.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.543940 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:23:35.551319 kernel: audit: type=1130 audit(1734099815.542:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.551241 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:23:35.553409 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:23:35.554445 systemd[1]: Finished ignition-quench.service.
Dec 13 14:23:35.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.561893 kernel: audit: type=1130 audit(1734099815.555:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.561929 kernel: audit: type=1131 audit(1734099815.555:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.564853 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:23:35.565887 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:23:35.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.567729 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:23:35.575033 kernel: audit: type=1130 audit(1734099815.566:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.575055 kernel: audit: type=1131 audit(1734099815.566:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.575007 systemd[1]: Reached target initrd.target.
Dec 13 14:23:35.576566 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:23:35.578474 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:23:35.588614 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:23:35.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.591059 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:23:35.593996 kernel: audit: type=1130 audit(1734099815.589:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.602799 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:23:35.603404 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:23:35.605090 systemd[1]: Stopped target timers.target.
Dec 13 14:23:35.606766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:23:35.612298 kernel: audit: type=1131 audit(1734099815.607:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.606950 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:23:35.608582 systemd[1]: Stopped target initrd.target.
Dec 13 14:23:35.612825 systemd[1]: Stopped target basic.target.
Dec 13 14:23:35.614315 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:23:35.615461 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:23:35.617309 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:23:35.618951 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:23:35.620575 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:23:35.622321 systemd[1]: Stopped target sysinit.target.
Dec 13 14:23:35.623848 systemd[1]: Stopped target local-fs.target.
Dec 13 14:23:35.625625 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:23:35.627336 systemd[1]: Stopped target swap.target.
Dec 13 14:23:35.634618 kernel: audit: type=1131 audit(1734099815.630:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.629001 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:23:35.629171 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:23:35.640915 kernel: audit: type=1131 audit(1734099815.635:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.630758 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:23:35.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.635358 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:23:35.635472 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:23:35.637001 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:23:35.637120 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:23:35.641500 systemd[1]: Stopped target paths.target.
Dec 13 14:23:35.643198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:23:35.647025 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:23:35.647853 systemd[1]: Stopped target slices.target.
Dec 13 14:23:35.648372 systemd[1]: Stopped target sockets.target.
Dec 13 14:23:35.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.652490 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:23:35.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.652602 systemd[1]: Closed iscsid.socket.
Dec 13 14:23:35.654078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:23:35.654208 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:23:35.655869 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:23:35.655996 systemd[1]: Stopped ignition-files.service.
Dec 13 14:23:35.658703 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:23:35.659678 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:23:35.663766 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:23:35.668441 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:23:35.668658 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:23:35.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.671469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:23:35.671707 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:23:35.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.676224 ignition[872]: INFO     : Ignition 2.14.0
Dec 13 14:23:35.676224 ignition[872]: INFO     : Stage: umount
Dec 13 14:23:35.676224 ignition[872]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:35.676224 ignition[872]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:35.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.682036 ignition[872]: INFO     : umount: umount passed
Dec 13 14:23:35.682036 ignition[872]: INFO     : Ignition finished successfully
Dec 13 14:23:35.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.676641 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:23:35.676766 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:23:35.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.679189 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:23:35.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.679297 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:23:35.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.680623 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:23:35.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.680723 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:23:35.690089 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:23:35.690194 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:23:35.694527 systemd[1]: Stopped target network.target.
Dec 13 14:23:35.696015 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:23:35.696086 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:23:35.697775 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:23:35.697821 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:23:35.698474 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:23:35.698514 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:23:35.701152 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:23:35.701614 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:23:35.704724 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:23:35.715030 systemd-networkd[714]: eth0: DHCPv6 lease lost
Dec 13 14:23:35.717028 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:23:35.717137 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:23:35.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.720440 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:23:35.720470 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:23:35.723346 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:23:35.723665 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:23:35.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.723710 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:23:35.725000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:23:35.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.725658 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:23:35.725700 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:23:35.731427 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:23:35.731480 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:23:35.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.733395 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:23:35.736290 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:23:35.736713 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:23:35.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.736790 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:23:35.742519 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:23:35.742000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:23:35.743582 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:23:35.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.746011 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:23:35.747076 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:23:35.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.748704 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:23:35.748741 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:23:35.751383 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:23:35.751413 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:23:35.753595 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:23:35.753633 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:23:35.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.757566 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:23:35.757670 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:23:35.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.759422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:23:35.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.759465 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:23:35.763859 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:23:35.765992 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:23:35.766039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:23:35.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.768919 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:23:35.768954 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:23:35.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.771583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:23:35.771614 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:23:35.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.775192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:23:35.777117 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:23:35.778349 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:23:35.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.812470 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:23:35.812565 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:23:35.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.815098 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:23:35.816801 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:23:35.817773 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:23:35.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:35.820064 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:23:35.833433 systemd[1]: Switching root.
Dec 13 14:23:35.850810 iscsid[720]: iscsid shutting down.
Dec 13 14:23:35.851783 systemd-journald[198]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:23:35.851860 systemd-journald[198]: Journal stopped
Dec 13 14:23:40.926575 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 14:23:40.926629 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 14:23:40.926644 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:23:40.926654 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 14:23:40.926668 kernel: SELinux:  policy capability open_perms=1
Dec 13 14:23:40.926678 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 14:23:40.926693 kernel: SELinux:  policy capability always_check_network=0
Dec 13 14:23:40.926702 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 14:23:40.926714 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 14:23:40.926723 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 14:23:40.926732 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 14:23:40.926743 systemd[1]: Successfully loaded SELinux policy in 42.047ms.
Dec 13 14:23:40.926761 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.383ms.
Dec 13 14:23:40.926773 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:23:40.926793 systemd[1]: Detected virtualization kvm.
Dec 13 14:23:40.926808 systemd[1]: Detected architecture x86-64.
Dec 13 14:23:40.926818 systemd[1]: Detected first boot.
Dec 13 14:23:40.926828 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:23:40.926842 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:23:40.926856 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:23:40.926868 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:23:40.926879 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:23:40.926891 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:23:40.926901 kernel: kauditd_printk_skb: 47 callbacks suppressed
Dec 13 14:23:40.926910 kernel: audit: type=1334 audit(1734099820.674:85): prog-id=12 op=LOAD
Dec 13 14:23:40.926920 kernel: audit: type=1334 audit(1734099820.674:86): prog-id=3 op=UNLOAD
Dec 13 14:23:40.926929 kernel: audit: type=1334 audit(1734099820.678:87): prog-id=13 op=LOAD
Dec 13 14:23:40.926943 kernel: audit: type=1334 audit(1734099820.681:88): prog-id=14 op=LOAD
Dec 13 14:23:40.926952 kernel: audit: type=1334 audit(1734099820.681:89): prog-id=4 op=UNLOAD
Dec 13 14:23:40.926961 kernel: audit: type=1334 audit(1734099820.681:90): prog-id=5 op=UNLOAD
Dec 13 14:23:40.926990 kernel: audit: type=1334 audit(1734099820.683:91): prog-id=15 op=LOAD
Dec 13 14:23:40.927000 kernel: audit: type=1334 audit(1734099820.683:92): prog-id=12 op=UNLOAD
Dec 13 14:23:40.927009 kernel: audit: type=1334 audit(1734099820.685:93): prog-id=16 op=LOAD
Dec 13 14:23:40.927018 kernel: audit: type=1334 audit(1734099820.688:94): prog-id=17 op=LOAD
Dec 13 14:23:40.927028 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:23:40.927038 systemd[1]: Stopped iscsid.service.
Dec 13 14:23:40.927053 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:23:40.927064 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:23:40.927074 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:23:40.927084 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:23:40.927095 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:23:40.927104 systemd[1]: Created slice system-getty.slice.
Dec 13 14:23:40.927115 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:23:40.927130 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:23:40.927141 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:23:40.927151 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:23:40.927161 systemd[1]: Created slice user.slice.
Dec 13 14:23:40.927171 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:23:40.927181 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:23:40.927193 systemd[1]: Set up automount boot.automount.
Dec 13 14:23:40.927205 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:23:40.927217 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:23:40.927235 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:23:40.927252 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:23:40.927265 systemd[1]: Reached target integritysetup.target.
Dec 13 14:23:40.927276 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:23:40.927287 systemd[1]: Reached target remote-fs.target.
Dec 13 14:23:40.927297 systemd[1]: Reached target slices.target.
Dec 13 14:23:40.927308 systemd[1]: Reached target swap.target.
Dec 13 14:23:40.927318 systemd[1]: Reached target torcx.target.
Dec 13 14:23:40.927328 systemd[1]: Reached target veritysetup.target.
Dec 13 14:23:40.927338 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:23:40.927353 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:23:40.927364 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:23:40.927374 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:23:40.927385 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:23:40.927395 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:23:40.927405 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:23:40.927421 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:23:40.927441 systemd[1]: Mounting media.mount...
Dec 13 14:23:40.927456 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:23:40.927476 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:23:40.927486 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:23:40.927496 systemd[1]: Mounting tmp.mount...
Dec 13 14:23:40.927506 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:23:40.927517 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:23:40.927527 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:23:40.927537 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:23:40.927547 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:23:40.927557 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:23:40.927571 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:23:40.927582 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:23:40.927591 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:23:40.927602 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:23:40.927612 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:23:40.927622 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:23:40.927633 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:23:40.927643 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:23:40.927653 kernel: fuse: init (API version 7.34)
Dec 13 14:23:40.927667 kernel: loop: module loaded
Dec 13 14:23:40.927676 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:23:40.927687 systemd[1]: Starting systemd-journald.service...
Dec 13 14:23:40.927696 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:23:40.927707 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:23:40.927717 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:23:40.927728 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:23:40.927747 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:23:40.927765 systemd[1]: Stopped verity-setup.service.
Dec 13 14:23:40.927800 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:23:40.927828 systemd-journald[994]: Journal started
Dec 13 14:23:40.927874 systemd-journald[994]: Runtime Journal (/run/log/journal/97d50b65e54740f5ba92564834ed0ecc) is 6.0M, max 48.4M, 42.4M free.
Dec 13 14:23:35.920000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:23:36.380000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:23:36.381000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:23:36.381000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:23:36.381000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:23:36.381000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:23:36.381000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:23:36.417000 audit[905]: AVC avc:  denied  { associate } for  pid=905 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:23:36.417000 audit[905]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:23:36.417000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:23:36.420000 audit[905]: AVC avc:  denied  { associate } for  pid=905 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:23:36.420000 audit[905]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079b9 a2=1ed a3=0 items=2 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:23:36.420000 audit: CWD cwd="/"
Dec 13 14:23:36.420000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:36.420000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:36.420000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:23:40.674000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:23:40.674000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:23:40.678000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:23:40.681000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:23:40.681000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:23:40.681000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:23:40.683000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:23:40.683000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:23:40.685000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:23:40.688000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:23:40.688000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:23:40.688000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:23:40.689000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:23:40.689000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:23:40.689000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:23:40.689000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:23:40.689000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:23:40.689000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:23:40.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.700000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:23:40.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.903000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:23:40.903000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:23:40.903000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:23:40.903000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:23:40.903000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:23:40.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.924000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:23:40.924000 audit[994]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd8d2eac20 a2=4000 a3=7ffd8d2eacbc items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:23:40.924000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:23:36.416500 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:23:40.673378 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:23:36.416712 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:23:40.673390 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 14:23:36.416732 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:23:40.691481 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:23:36.416766 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:23:36.416778 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:23:36.416811 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:23:36.416825 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:23:36.417093 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:23:36.417133 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:23:36.417147 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:23:36.417483 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:23:36.417520 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:23:36.417541 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:23:36.417559 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:23:36.417580 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:23:36.417598 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:23:40.323056 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:23:40.323344 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:23:40.323447 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:23:40.323781 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:23:40.323840 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:23:40.323910 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-12-13T14:23:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
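
Note: the "system state sealed" line above shows torcx writing its runtime metadata to /run/metadata/torcx as an env-style file, so later units can locate the unpacked Docker binaries. A minimal sketch of consuming it, assuming a hypothetical drop-in path:

    # /etc/systemd/system/docker.service.d/10-torcx.conf (hypothetical path)
    [Service]
    # Import TORCX_BINDIR, TORCX_UNPACKDIR, etc. from the sealed state logged above.
    EnvironmentFile=/run/metadata/torcx
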
Dec 13 14:23:40.938011 systemd[1]: Started systemd-journald.service.
Dec 13 14:23:40.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.939644 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:23:40.940476 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:23:40.941257 systemd[1]: Mounted media.mount.
Dec 13 14:23:40.942056 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:23:40.942909 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:23:40.943813 systemd[1]: Mounted tmp.mount.
Dec 13 14:23:40.944734 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:23:40.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.945910 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:23:40.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.946964 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:23:40.947102 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:23:40.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.948118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:23:40.948273 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:23:40.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.950027 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:23:40.950170 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:23:40.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.951278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:23:40.951500 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:23:40.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.952726 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:23:40.952904 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:23:40.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.954109 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:23:40.954342 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:23:40.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.984667 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:23:40.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.985928 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:23:40.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.987171 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:23:40.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.988269 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:23:40.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:40.989600 systemd[1]: Reached target network-pre.target.
Dec 13 14:23:40.992535 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:23:40.994914 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:23:40.995720 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:23:40.997687 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:23:41.000145 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:23:41.024623 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:23:41.034603 systemd-journald[994]: Time spent on flushing to /var/log/journal/97d50b65e54740f5ba92564834ed0ecc is 27.702ms for 1184 entries.
Dec 13 14:23:41.034603 systemd-journald[994]: System Journal (/var/log/journal/97d50b65e54740f5ba92564834ed0ecc) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:23:41.151852 systemd-journald[994]: Received client request to flush runtime journal.
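
Note: the flush request above comes from systemd-journal-flush.service, which runs `journalctl --flush` to move the runtime journal from /run/log/journal into /var/log/journal; in the "System Journal" line two entries up, 8.0M is the persistent journal's current size and 195.6M its cap. Those caps can be tuned in journald.conf (values below are illustrative, not read from this host):

    # /etc/systemd/journald.conf (illustrative values)
    [Journal]
    SystemMaxUse=200M
    RuntimeMaxUse=50M
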
Dec 13 14:23:41.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.025867 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:23:41.026886 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:23:41.028089 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:23:41.030566 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:23:41.033638 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:23:41.153172 udevadm[1008]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
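
Note: the deprecation warning above names the units still pulling in systemd-udev-settle.service; the reverse dependencies it complains about can be confirmed with:

    systemctl list-dependencies --reverse systemd-udev-settle.service
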
Dec 13 14:23:41.036423 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:23:41.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.038067 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:23:41.072359 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:23:41.074662 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:23:41.076946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:23:41.112025 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:23:41.113929 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:23:41.131627 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:23:41.153371 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:23:41.868194 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:23:41.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.880000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:23:41.880000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:23:41.880000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:23:41.880000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:23:41.882330 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:23:41.900001 systemd-udevd[1013]: Using default interface naming scheme 'v252'.
Dec 13 14:23:41.915410 systemd[1]: Started systemd-udevd.service.
Dec 13 14:23:41.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.930000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:23:41.932690 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:23:41.937000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:23:41.937000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:23:41.937000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:23:41.939248 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:23:41.947744 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:23:41.975931 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:23:41.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:41.982143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:23:42.001000 audit[1023]: AVC avc:  denied  { confidentiality } for  pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:23:42.001000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dae136d910 a1=337fc a2=7f43f2465bc5 a3=5 items=110 ppid=1013 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:23:42.001000 audit: CWD cwd="/"
Dec 13 14:23:42.001000 audit: PATH item=0 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=1 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=2 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=3 name=(null) inode=14673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=4 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=5 name=(null) inode=14674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=6 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=7 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=8 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=9 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=10 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=11 name=(null) inode=14677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=12 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=13 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=14 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=15 name=(null) inode=14679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=16 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=17 name=(null) inode=14680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=18 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=19 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=20 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=21 name=(null) inode=14682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=22 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=23 name=(null) inode=14683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=24 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=25 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=26 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=27 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=28 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=29 name=(null) inode=14686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=30 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=31 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=32 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=33 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=34 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=35 name=(null) inode=14689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=36 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=37 name=(null) inode=14690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=38 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=39 name=(null) inode=14691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=40 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=41 name=(null) inode=14692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=42 name=(null) inode=14672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=43 name=(null) inode=14693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=44 name=(null) inode=14693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=45 name=(null) inode=14694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=46 name=(null) inode=14693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=47 name=(null) inode=14695 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=48 name=(null) inode=14693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=49 name=(null) inode=14696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=50 name=(null) inode=14693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=51 name=(null) inode=14697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=52 name=(null) inode=14693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=53 name=(null) inode=14698 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=54 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=55 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=56 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=57 name=(null) inode=14700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=58 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=59 name=(null) inode=14701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=60 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=61 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=62 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=63 name=(null) inode=14703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=64 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=65 name=(null) inode=14704 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=66 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=67 name=(null) inode=14705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=68 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=69 name=(null) inode=14706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=70 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=71 name=(null) inode=14707 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=72 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=73 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=74 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=75 name=(null) inode=14709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=76 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=77 name=(null) inode=14710 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=78 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=79 name=(null) inode=14711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=80 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=81 name=(null) inode=14712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=82 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=83 name=(null) inode=14713 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=84 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=85 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=86 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=87 name=(null) inode=14715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=88 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=89 name=(null) inode=14716 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=90 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=91 name=(null) inode=14717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=92 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=93 name=(null) inode=14718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=94 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=95 name=(null) inode=14719 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=96 name=(null) inode=14699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=97 name=(null) inode=14720 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=98 name=(null) inode=14720 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=99 name=(null) inode=14721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=100 name=(null) inode=14720 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=101 name=(null) inode=14722 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=102 name=(null) inode=14720 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=103 name=(null) inode=14723 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=104 name=(null) inode=14720 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=105 name=(null) inode=14724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=106 name=(null) inode=14720 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=107 name=(null) inode=14725 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PATH item=109 name=(null) inode=14726 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:23:42.001000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:23:42.013720 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 14:23:42.016258 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 14:23:42.016467 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 14:23:42.016679 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 14:23:42.042017 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 14:23:42.048999 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 13 14:23:42.084021 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:23:42.090912 systemd-networkd[1032]: lo: Link UP
Dec 13 14:23:42.090940 systemd-networkd[1032]: lo: Gained carrier
Dec 13 14:23:42.091558 systemd-networkd[1032]: Enumeration completed
Dec 13 14:23:42.091698 systemd-networkd[1032]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:23:42.093377 systemd-networkd[1032]: eth0: Link UP
Dec 13 14:23:42.093388 systemd-networkd[1032]: eth0: Gained carrier
Dec 13 14:23:42.093615 systemd[1]: Started systemd-networkd.service.
Dec 13 14:23:42.094996 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:23:42.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.141312 systemd-networkd[1032]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
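
Note: eth0 was matched by the stock /usr/lib/systemd/network/zz-default.network (see the "Configuring with" line above), which enables DHCP as a catch-all. A minimal equivalent .network file, sketched here pinned to this one interface rather than the broader match the shipped file uses:

    [Match]
    Name=eth0

    [Network]
    DHCP=yes
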
Dec 13 14:23:42.161311 kernel: kvm: Nested Virtualization enabled
Dec 13 14:23:42.161449 kernel: SVM: kvm: Nested Paging enabled
Dec 13 14:23:42.161473 kernel: SVM: Virtual VMLOAD VMSAVE supported
Dec 13 14:23:42.162014 kernel: SVM: Virtual GIF supported
Dec 13 14:23:42.183012 kernel: EDAC MC: Ver: 3.0.0
Dec 13 14:23:42.210508 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:23:42.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.226443 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:23:42.236247 lvm[1049]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:23:42.328602 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:23:42.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.331336 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:23:42.333363 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:23:42.336803 lvm[1050]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:23:42.364496 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:23:42.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.365701 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:23:42.366759 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:23:42.366810 systemd[1]: Reached target local-fs.target.
Dec 13 14:23:42.367694 systemd[1]: Reached target machines.target.
Dec 13 14:23:42.369808 systemd[1]: Starting ldconfig.service...
Dec 13 14:23:42.370957 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:23:42.371031 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:42.371937 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:23:42.397390 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:23:42.399446 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:23:42.401249 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:23:42.402501 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl)
Dec 13 14:23:42.403564 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:23:42.405550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:23:42.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.416858 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:23:42.421132 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:23:42.421275 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:23:42.433019 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 14:23:42.444771 systemd-fsck[1059]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:23:42.444771 systemd-fsck[1059]: /dev/vda1: 790 files, 119311/258078 clusters
Dec 13 14:23:42.446304 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:23:42.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.449188 systemd[1]: Mounting boot.mount...
Dec 13 14:23:42.785100 systemd[1]: Mounted boot.mount.
Dec 13 14:23:42.794988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:23:42.798621 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:23:42.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.849015 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 14:23:42.854792 (sd-sysext)[1065]: Using extensions 'kubernetes'.
Dec 13 14:23:42.855173 (sd-sysext)[1065]: Merged extensions into '/usr'.
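
Note: systemd-sysext overlays extension images (here 'kubernetes', picked up from a directory such as /etc/extensions or /var/lib/extensions) onto /usr and /opt. The merged state can be inspected or undone with:

    systemd-sysext status
    systemd-sysext unmerge
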
Dec 13 14:23:42.893914 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:23:42.895532 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:23:42.896786 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:23:42.898692 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:23:42.901314 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:23:42.913591 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:23:42.914425 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:23:42.914561 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:42.914681 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:23:42.917620 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:23:42.920257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:23:42.920422 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:23:42.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.921773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:23:42.921901 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:23:42.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.923251 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:23:42.923371 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:23:42.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.924661 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:23:42.924779 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:23:42.925915 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:23:42.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:42.928057 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:23:42.930326 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:23:42.937965 systemd[1]: Reloading.
Dec 13 14:23:42.944016 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:23:42.944848 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:23:42.955468 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
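
Note: each "Duplicate line" warning above means two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition in lexical order and ignores the rest, so these are harmless. To change such an entry deliberately, shadow the shipped fragment with an identically named file in /etc/tmpfiles.d — keeping in mind the shadowing file replaces the shipped one entirely. A one-line sketch:

    # /etc/tmpfiles.d/legacy.conf (shadows /usr/lib/tmpfiles.d/legacy.conf wholesale)
    d /run/lock 0755 root root -
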
Dec 13 14:23:42.970989 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:23:43.045327 /usr/lib/systemd/system-generators/torcx-generator[1092]: time="2024-12-13T14:23:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:23:43.045694 /usr/lib/systemd/system-generators/torcx-generator[1092]: time="2024-12-13T14:23:43Z" level=info msg="torcx already run"
Dec 13 14:23:43.119615 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:23:43.119629 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
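
Note: both locksmithd.service warnings are cgroup-v1 directives with direct cgroup-v2 replacements; a hedged migration sketch via drop-in (file path and values illustrative, not taken from this host):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (hypothetical)
    [Service]
    # CPUWeight= replaces CPUShares= (the old default of 1024 maps to weight 100).
    CPUWeight=100
    # MemoryMax= replaces MemoryLimit=.
    MemoryMax=128M
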
Dec 13 14:23:43.144794 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
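
Note: systemd rewrites the legacy /var/run/docker.sock path on the fly here; silencing the warning means updating the unit itself, e.g. via a drop-in. The empty ListenStream= first clears the inherited listener list (standard systemd list-reset semantics) before setting the new path:

    # /etc/systemd/system/docker.socket.d/10-socket-path.conf (hypothetical)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
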
Dec 13 14:23:43.198742 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:23:43.201000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:23:43.201000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:23:43.201000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:23:43.201000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:23:43.201000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:23:43.201000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:23:43.202000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:23:43.204000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:23:43.204000 audit: BPF prog-id=38 op=LOAD
Dec 13 14:23:43.204000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:23:43.204000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:23:43.207218 systemd[1]: Finished ldconfig.service.
Dec 13 14:23:43.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.208404 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:23:43.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.210645 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:23:43.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.214417 systemd[1]: Starting audit-rules.service...
Dec 13 14:23:43.216219 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:23:43.219000 audit: BPF prog-id=39 op=LOAD
Dec 13 14:23:43.221000 audit: BPF prog-id=40 op=LOAD
Dec 13 14:23:43.218240 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:23:43.220746 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:23:43.222902 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:23:43.224538 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:23:43.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.226021 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:23:43.228000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.234190 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.235618 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:23:43.237843 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:23:43.239926 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:23:43.240861 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.241058 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:43.241202 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:23:43.242486 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:23:43.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.244292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:23:43.244408 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:23:43.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.246289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:23:43.246403 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:23:43.248171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:23:43.248285 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:23:43.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.250859 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:23:43.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.253913 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:23:43.254212 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.256440 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:23:43.259710 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.261474 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:23:43.264058 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:23:43.266260 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:23:43.267364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.267526 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:43.267656 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:23:43.268863 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:23:43.271545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:23:43.271722 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:23:43.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.273621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:23:43.273785 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:23:43.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.275463 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:23:43.275614 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:23:43.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:43.277311 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:23:43.277459 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.281825 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.282240 augenrules[1161]: No rules
Dec 13 14:23:43.281000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:23:43.281000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe963c0450 a2=420 a3=0 items=0 ppid=1133 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:23:43.281000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:23:43.283792 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:23:43.286435 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:23:43.289227 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:23:43.292199 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:23:43.293286 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.293437 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:43.295081 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:23:43.296300 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:23:43.297376 systemd[1]: Finished audit-rules.service.
Dec 13 14:23:43.297506 systemd-resolved[1138]: Positive Trust Anchors:
Dec 13 14:23:43.297525 systemd-resolved[1138]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:23:43.297563 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:23:43.298788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:23:43.298920 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:23:43.300536 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:23:43.300644 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:23:43.301953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:23:43.302077 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:23:43.303506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:23:43.303610 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:23:43.794224 systemd-timesyncd[1141]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 14:23:43.794272 systemd-timesyncd[1141]: Initial clock synchronization to Fri 2024-12-13 14:23:43.794144 UTC.
Dec 13 14:23:43.794666 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:23:43.797115 systemd[1]: Reached target time-set.target.
Dec 13 14:23:43.798335 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:23:43.798377 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.798765 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:23:43.799051 systemd-resolved[1138]: Defaulting to hostname 'linux'.
Dec 13 14:23:43.802565 systemd[1]: Started systemd-resolved.service.
Dec 13 14:23:43.803728 systemd[1]: Reached target network.target.
Dec 13 14:23:43.804779 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:23:43.805908 systemd[1]: Reached target sysinit.target.
Dec 13 14:23:43.806901 systemd[1]: Started motdgen.path.
Dec 13 14:23:43.807769 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:23:43.809203 systemd[1]: Started logrotate.timer.
Dec 13 14:23:43.810124 systemd[1]: Started mdadm.timer.
Dec 13 14:23:43.810971 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:23:43.812201 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:23:43.812247 systemd[1]: Reached target paths.target.
Dec 13 14:23:43.813180 systemd[1]: Reached target timers.target.
Dec 13 14:23:43.814632 systemd[1]: Listening on dbus.socket.
Dec 13 14:23:43.816632 systemd[1]: Starting docker.socket...
Dec 13 14:23:43.820915 systemd[1]: Listening on sshd.socket.
Dec 13 14:23:43.822142 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:43.822652 systemd[1]: Listening on docker.socket.
Dec 13 14:23:43.823746 systemd[1]: Reached target sockets.target.
Dec 13 14:23:43.824978 systemd[1]: Reached target basic.target.
Dec 13 14:23:43.826048 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.826094 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:23:43.827551 systemd[1]: Starting containerd.service...
Dec 13 14:23:43.829897 systemd[1]: Starting dbus.service...
Dec 13 14:23:43.862667 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:23:43.864883 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:23:43.866123 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:23:43.867337 jq[1176]: false
Dec 13 14:23:43.867452 systemd[1]: Starting motdgen.service...
Dec 13 14:23:43.869204 systemd-networkd[1032]: eth0: Gained IPv6LL
Dec 13 14:23:43.869589 systemd[1]: Starting prepare-helm.service...
Dec 13 14:23:43.871924 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:23:43.874504 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:23:43.878623 systemd[1]: Starting systemd-logind.service...
Dec 13 14:23:43.881171 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:43.881260 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:23:43.881937 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:23:43.884332 systemd[1]: Starting update-engine.service...
Dec 13 14:23:43.886775 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:23:43.888922 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:23:43.890762 jq[1193]: true
Dec 13 14:23:43.892612 extend-filesystems[1177]: Found loop1
Dec 13 14:23:43.892612 extend-filesystems[1177]: Found sr0
Dec 13 14:23:43.892612 extend-filesystems[1177]: Found vda
Dec 13 14:23:43.892517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda1
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda2
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda3
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found usr
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda4
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda6
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda7
Dec 13 14:23:43.941737 extend-filesystems[1177]: Found vda9
Dec 13 14:23:43.941737 extend-filesystems[1177]: Checking size of /dev/vda9
Dec 13 14:23:43.896097 dbus-daemon[1175]: [system] SELinux support is enabled
Dec 13 14:23:43.892829 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:23:43.958549 extend-filesystems[1177]: Resized partition /dev/vda9
Dec 13 14:23:43.948009 systemd[1]: Started dbus.service.
Dec 13 14:23:43.959604 extend-filesystems[1201]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:23:43.952792 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:23:43.952951 systemd[1]: Finished motdgen.service.
Dec 13 14:23:43.961759 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:23:43.961985 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:23:43.998377 systemd[1]: Reached target network-online.target.
Dec 13 14:23:43.999528 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:23:44.025240 systemd[1]: Starting kubelet.service...
Dec 13 14:23:44.027296 jq[1204]: true
Dec 13 14:23:44.037216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:23:44.037284 systemd[1]: Reached target system-config.target.
Dec 13 14:23:44.038711 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:23:44.038737 systemd[1]: Reached target user-config.target.
Dec 13 14:23:44.039949 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:23:44.088580 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 14:23:44.124036 env[1205]: time="2024-12-13T14:23:44.123942731Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:23:44.145981 env[1205]: time="2024-12-13T14:23:44.145536808Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:23:44.145981 env[1205]: time="2024-12-13T14:23:44.145796024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:44.147383 env[1205]: time="2024-12-13T14:23:44.147327666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:23:44.147383 env[1205]: time="2024-12-13T14:23:44.147358775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:44.147624 env[1205]: time="2024-12-13T14:23:44.147579028Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:23:44.147624 env[1205]: time="2024-12-13T14:23:44.147594807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:44.147624 env[1205]: time="2024-12-13T14:23:44.147608533Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:23:44.147624 env[1205]: time="2024-12-13T14:23:44.147617099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:44.147720 env[1205]: time="2024-12-13T14:23:44.147705655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:44.148195 env[1205]: time="2024-12-13T14:23:44.148151692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:44.148644 env[1205]: time="2024-12-13T14:23:44.148601816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:23:44.148644 env[1205]: time="2024-12-13T14:23:44.148627123Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:23:44.148720 env[1205]: time="2024-12-13T14:23:44.148690923Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:23:44.148720 env[1205]: time="2024-12-13T14:23:44.148707534Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:23:44.197113 tar[1203]: linux-amd64/helm
Dec 13 14:23:44.202805 systemd-logind[1187]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:23:44.202836 systemd-logind[1187]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:23:44.209283 update_engine[1190]: I1213 14:23:44.205924  1190 main.cc:92] Flatcar Update Engine starting
Dec 13 14:23:44.203909 systemd-logind[1187]: New seat seat0.
Dec 13 14:23:44.208450 systemd[1]: Started systemd-logind.service.
Dec 13 14:23:44.257914 systemd[1]: Started update-engine.service.
Dec 13 14:23:44.258480 update_engine[1190]: I1213 14:23:44.258430  1190 update_check_scheduler.cc:74] Next update check in 5m49s
Dec 13 14:23:44.261385 systemd[1]: Started locksmithd.service.
Dec 13 14:23:44.294605 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 14:23:44.295790 extend-filesystems[1201]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:23:44.295790 extend-filesystems[1201]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:23:44.295790 extend-filesystems[1201]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 14:23:44.311119 extend-filesystems[1177]: Resized filesystem in /dev/vda9
Dec 13 14:23:44.312614 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:23:44.312759 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:23:44.317776 bash[1223]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317626057Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317711737Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317731424Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317818518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317843555Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317861418Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317876687Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317888689Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317906012Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317918886Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317933714Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.317967 env[1205]: time="2024-12-13T14:23:44.317947930Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:23:44.318397 env[1205]: time="2024-12-13T14:23:44.318105826Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:23:44.318397 env[1205]: time="2024-12-13T14:23:44.318204151Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318560279Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318598911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318611134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318691655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318715019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318732702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318752960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318770934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318789789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318815808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318827039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318841606Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318984424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.318998991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320444 env[1205]: time="2024-12-13T14:23:44.319011956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.318682 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:23:44.320827 env[1205]: time="2024-12-13T14:23:44.319034197Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:23:44.320827 env[1205]: time="2024-12-13T14:23:44.319065616Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:23:44.320827 env[1205]: time="2024-12-13T14:23:44.319077539Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:23:44.320827 env[1205]: time="2024-12-13T14:23:44.319097356Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:23:44.320827 env[1205]: time="2024-12-13T14:23:44.319142140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:23:44.320965 env[1205]: time="2024-12-13T14:23:44.319396196Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:23:44.320965 env[1205]: time="2024-12-13T14:23:44.319469734Z" level=info msg="Connect containerd service"
Dec 13 14:23:44.320965 env[1205]: time="2024-12-13T14:23:44.319519087Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:23:44.320965 env[1205]: time="2024-12-13T14:23:44.320201046Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:23:44.322217 env[1205]: time="2024-12-13T14:23:44.321019090Z" level=info msg="Start subscribing containerd event"
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323256876Z" level=info msg="Start recovering state"
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323363817Z" level=info msg="Start event monitor"
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323380308Z" level=info msg="Start snapshots syncer"
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323433949Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323449017Z" level=info msg="Start streaming server"
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323735414Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323780248Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:23:44.325141 env[1205]: time="2024-12-13T14:23:44.323871569Z" level=info msg="containerd successfully booted in 0.201427s"
Dec 13 14:23:44.323917 systemd[1]: Started containerd.service.
Dec 13 14:23:44.426033 locksmithd[1233]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:23:44.724982 tar[1203]: linux-amd64/LICENSE
Dec 13 14:23:44.725143 tar[1203]: linux-amd64/README.md
Dec 13 14:23:44.729827 systemd[1]: Finished prepare-helm.service.
Dec 13 14:23:44.908598 sshd_keygen[1198]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:23:44.934560 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:23:44.938621 systemd[1]: Starting issuegen.service...
Dec 13 14:23:44.949167 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:23:44.949415 systemd[1]: Finished issuegen.service.
Dec 13 14:23:44.952808 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:23:44.968122 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:23:44.975736 systemd[1]: Started getty@tty1.service.
Dec 13 14:23:44.979139 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:23:44.980597 systemd[1]: Reached target getty.target.
Dec 13 14:23:45.126827 systemd[1]: Started kubelet.service.
Dec 13 14:23:45.128237 systemd[1]: Reached target multi-user.target.
Dec 13 14:23:45.130607 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:23:45.138222 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:23:45.138362 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:23:45.139590 systemd[1]: Startup finished in 1.061s (kernel) + 6.014s (initrd) + 8.790s (userspace) = 15.866s.
Dec 13 14:23:46.027372 kubelet[1258]: E1213 14:23:46.027298    1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:23:46.029007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:23:46.029165 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:23:46.029420 systemd[1]: kubelet.service: Consumed 1.576s CPU time.
Dec 13 14:23:53.678126 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:23:53.679276 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:52516.service.
Dec 13 14:23:53.726933 sshd[1267]: Accepted publickey for core from 10.0.0.1 port 52516 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:23:53.728642 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:53.738285 systemd[1]: Created slice user-500.slice.
Dec 13 14:23:53.739539 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:23:53.741416 systemd-logind[1187]: New session 1 of user core.
Dec 13 14:23:53.747997 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:23:53.749575 systemd[1]: Starting user@500.service...
Dec 13 14:23:53.752521 (systemd)[1270]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:53.826754 systemd[1270]: Queued start job for default target default.target.
Dec 13 14:23:53.827231 systemd[1270]: Reached target paths.target.
Dec 13 14:23:53.827251 systemd[1270]: Reached target sockets.target.
Dec 13 14:23:53.827262 systemd[1270]: Reached target timers.target.
Dec 13 14:23:53.827272 systemd[1270]: Reached target basic.target.
Dec 13 14:23:53.827305 systemd[1270]: Reached target default.target.
Dec 13 14:23:53.827326 systemd[1270]: Startup finished in 68ms.
Dec 13 14:23:53.827524 systemd[1]: Started user@500.service.
Dec 13 14:23:53.828926 systemd[1]: Started session-1.scope.
Dec 13 14:23:53.880979 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:52532.service.
Dec 13 14:23:53.929044 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 52532 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:23:53.930382 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:53.934337 systemd-logind[1187]: New session 2 of user core.
Dec 13 14:23:53.935575 systemd[1]: Started session-2.scope.
Dec 13 14:23:53.987979 sshd[1279]: pam_unix(sshd:session): session closed for user core
Dec 13 14:23:53.990663 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:52532.service: Deactivated successfully.
Dec 13 14:23:53.991253 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:23:53.991780 systemd-logind[1187]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:23:53.992632 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:52544.service.
Dec 13 14:23:53.993435 systemd-logind[1187]: Removed session 2.
Dec 13 14:23:54.033238 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 52544 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:23:54.035260 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:54.040117 systemd-logind[1187]: New session 3 of user core.
Dec 13 14:23:54.040826 systemd[1]: Started session-3.scope.
Dec 13 14:23:54.090967 sshd[1285]: pam_unix(sshd:session): session closed for user core
Dec 13 14:23:54.093760 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:52544.service: Deactivated successfully.
Dec 13 14:23:54.094291 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:23:54.094874 systemd-logind[1187]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:23:54.096170 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:52550.service.
Dec 13 14:23:54.096872 systemd-logind[1187]: Removed session 3.
Dec 13 14:23:54.140342 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 52550 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:23:54.141749 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:54.146184 systemd-logind[1187]: New session 4 of user core.
Dec 13 14:23:54.147183 systemd[1]: Started session-4.scope.
Dec 13 14:23:54.204982 sshd[1292]: pam_unix(sshd:session): session closed for user core
Dec 13 14:23:54.208090 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:52550.service: Deactivated successfully.
Dec 13 14:23:54.208699 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:23:54.209292 systemd-logind[1187]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:23:54.210288 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:52552.service.
Dec 13 14:23:54.211158 systemd-logind[1187]: Removed session 4.
Dec 13 14:23:54.252240 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 52552 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:23:54.253547 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:54.257512 systemd-logind[1187]: New session 5 of user core.
Dec 13 14:23:54.258287 systemd[1]: Started session-5.scope.
Dec 13 14:23:54.315952 sudo[1301]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:23:54.316176 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:23:54.371976 systemd[1]: Starting docker.service...
Dec 13 14:23:54.498375 env[1313]: time="2024-12-13T14:23:54.498165512Z" level=info msg="Starting up"
Dec 13 14:23:54.499930 env[1313]: time="2024-12-13T14:23:54.499875489Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:23:54.499930 env[1313]: time="2024-12-13T14:23:54.499910314Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:23:54.499930 env[1313]: time="2024-12-13T14:23:54.499940401Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 14:23:54.500174 env[1313]: time="2024-12-13T14:23:54.499952273Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:23:54.502643 env[1313]: time="2024-12-13T14:23:54.502605629Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:23:54.502643 env[1313]: time="2024-12-13T14:23:54.502626548Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:23:54.502715 env[1313]: time="2024-12-13T14:23:54.502643780Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 14:23:54.502715 env[1313]: time="2024-12-13T14:23:54.502654631Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:23:54.659857 env[1313]: time="2024-12-13T14:23:54.659789515Z" level=info msg="Loading containers: start."
Dec 13 14:23:54.814094 kernel: Initializing XFRM netlink socket
Dec 13 14:23:54.846006 env[1313]: time="2024-12-13T14:23:54.845921639Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:23:54.901187 systemd-networkd[1032]: docker0: Link UP
Dec 13 14:23:55.140093 env[1313]: time="2024-12-13T14:23:55.139947152Z" level=info msg="Loading containers: done."
Dec 13 14:23:55.287303 env[1313]: time="2024-12-13T14:23:55.287241301Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:23:55.287562 env[1313]: time="2024-12-13T14:23:55.287505015Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 14:23:55.287676 env[1313]: time="2024-12-13T14:23:55.287658824Z" level=info msg="Daemon has completed initialization"
Dec 13 14:23:55.797566 systemd[1]: Started docker.service.
Dec 13 14:23:55.801906 env[1313]: time="2024-12-13T14:23:55.801833876Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:23:56.280045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:23:56.280307 systemd[1]: Stopped kubelet.service.
Dec 13 14:23:56.280355 systemd[1]: kubelet.service: Consumed 1.576s CPU time.
Dec 13 14:23:56.282226 systemd[1]: Starting kubelet.service...
Dec 13 14:23:56.412595 systemd[1]: Started kubelet.service.
Dec 13 14:23:56.545237 kubelet[1443]: E1213 14:23:56.544891    1443 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:23:56.547849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:23:56.547985 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:23:57.146760 env[1205]: time="2024-12-13T14:23:57.146691965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 14:23:58.408245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510883425.mount: Deactivated successfully.
Dec 13 14:24:00.687558 env[1205]: time="2024-12-13T14:24:00.687503600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:00.689630 env[1205]: time="2024-12-13T14:24:00.689591716Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:00.691753 env[1205]: time="2024-12-13T14:24:00.691715299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:00.693610 env[1205]: time="2024-12-13T14:24:00.693579034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:00.694545 env[1205]: time="2024-12-13T14:24:00.694482929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 14:24:00.697487 env[1205]: time="2024-12-13T14:24:00.697455894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 14:24:03.487262 env[1205]: time="2024-12-13T14:24:03.487186956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:03.489476 env[1205]: time="2024-12-13T14:24:03.489437707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:03.491376 env[1205]: time="2024-12-13T14:24:03.491347318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:03.493588 env[1205]: time="2024-12-13T14:24:03.493511116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:03.494702 env[1205]: time="2024-12-13T14:24:03.494649691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 14:24:03.495562 env[1205]: time="2024-12-13T14:24:03.495525183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 14:24:05.597445 env[1205]: time="2024-12-13T14:24:05.597330418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:05.601449 env[1205]: time="2024-12-13T14:24:05.601368912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:05.603793 env[1205]: time="2024-12-13T14:24:05.603753554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:05.606033 env[1205]: time="2024-12-13T14:24:05.605975270Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:05.606838 env[1205]: time="2024-12-13T14:24:05.606790229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 14:24:05.607491 env[1205]: time="2024-12-13T14:24:05.607450337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 14:24:06.701842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:24:06.701985 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:06.703670 systemd[1]: Starting kubelet.service...
Dec 13 14:24:06.794856 systemd[1]: Started kubelet.service.
Dec 13 14:24:07.041219 kubelet[1460]: E1213 14:24:07.041040    1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:24:07.043131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:24:07.043299 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:24:07.390397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248842565.mount: Deactivated successfully.
Dec 13 14:24:09.516434 env[1205]: time="2024-12-13T14:24:09.516357473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:09.561454 env[1205]: time="2024-12-13T14:24:09.561337677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:09.584650 env[1205]: time="2024-12-13T14:24:09.584592578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:09.606349 env[1205]: time="2024-12-13T14:24:09.606285199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:09.606829 env[1205]: time="2024-12-13T14:24:09.606770900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 14:24:09.607352 env[1205]: time="2024-12-13T14:24:09.607290394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:24:10.776367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35934161.mount: Deactivated successfully.
Dec 13 14:24:12.053261 env[1205]: time="2024-12-13T14:24:12.053146668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:12.056142 env[1205]: time="2024-12-13T14:24:12.056091270Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:12.057844 env[1205]: time="2024-12-13T14:24:12.057801317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:12.060347 env[1205]: time="2024-12-13T14:24:12.060298090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:12.061433 env[1205]: time="2024-12-13T14:24:12.061395447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:24:12.061974 env[1205]: time="2024-12-13T14:24:12.061947062Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 14:24:13.458780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134058066.mount: Deactivated successfully.
Dec 13 14:24:13.465871 env[1205]: time="2024-12-13T14:24:13.465822770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:13.467958 env[1205]: time="2024-12-13T14:24:13.467912920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:13.469494 env[1205]: time="2024-12-13T14:24:13.469449992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:13.470836 env[1205]: time="2024-12-13T14:24:13.470797970Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:13.471360 env[1205]: time="2024-12-13T14:24:13.471320711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 14:24:13.471833 env[1205]: time="2024-12-13T14:24:13.471761658Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 14:24:14.610862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075237226.mount: Deactivated successfully.
Dec 13 14:24:17.077154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 14:24:17.077384 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:17.078671 systemd[1]: Starting kubelet.service...
Dec 13 14:24:17.157284 systemd[1]: Started kubelet.service.
Dec 13 14:24:17.603354 kubelet[1471]: E1213 14:24:17.603288    1471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:24:17.605111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:24:17.605249 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:24:18.566889 env[1205]: time="2024-12-13T14:24:18.566804310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:18.570574 env[1205]: time="2024-12-13T14:24:18.570512337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:18.573419 env[1205]: time="2024-12-13T14:24:18.573366708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:18.575954 env[1205]: time="2024-12-13T14:24:18.575907839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:18.576861 env[1205]: time="2024-12-13T14:24:18.576811100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 14:24:20.651330 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:20.653455 systemd[1]: Starting kubelet.service...
Dec 13 14:24:20.673869 systemd[1]: Reloading.
Dec 13 14:24:20.740587 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2024-12-13T14:24:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:24:20.741008 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2024-12-13T14:24:20Z" level=info msg="torcx already run"
Dec 13 14:24:21.090323 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:24:21.090346 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:24:21.112698 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:24:21.210861 systemd[1]: Started kubelet.service.
Dec 13 14:24:21.212282 systemd[1]: Stopping kubelet.service...
Dec 13 14:24:21.212592 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:24:21.212795 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:21.214552 systemd[1]: Starting kubelet.service...
Dec 13 14:24:21.303523 systemd[1]: Started kubelet.service.
Dec 13 14:24:21.353545 kubelet[1575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:21.353545 kubelet[1575]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:24:21.353545 kubelet[1575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:21.353970 kubelet[1575]: I1213 14:24:21.353519    1575 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:24:21.840318 kubelet[1575]: I1213 14:24:21.840273    1575 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:24:21.840318 kubelet[1575]: I1213 14:24:21.840300    1575 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:24:21.840584 kubelet[1575]: I1213 14:24:21.840529    1575 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:24:21.944443 kubelet[1575]: E1213 14:24:21.944402    1575 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:21.944653 kubelet[1575]: I1213 14:24:21.944565    1575 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:24:21.953292 kubelet[1575]: E1213 14:24:21.953222    1575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:24:21.953292 kubelet[1575]: I1213 14:24:21.953272    1575 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:24:21.961075 kubelet[1575]: I1213 14:24:21.961001    1575 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 14:24:21.962720 kubelet[1575]: I1213 14:24:21.962664    1575 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:24:21.962969 kubelet[1575]: I1213 14:24:21.962842    1575 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:24:21.963140 kubelet[1575]: I1213 14:24:21.962890    1575 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:24:21.963264 kubelet[1575]: I1213 14:24:21.963160    1575 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:24:21.963264 kubelet[1575]: I1213 14:24:21.963173    1575 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:24:21.963376 kubelet[1575]: I1213 14:24:21.963354    1575 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:21.965338 kubelet[1575]: I1213 14:24:21.965316    1575 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:24:21.965403 kubelet[1575]: I1213 14:24:21.965343    1575 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:24:21.965428 kubelet[1575]: I1213 14:24:21.965408    1575 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:24:21.965459 kubelet[1575]: I1213 14:24:21.965441    1575 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:24:21.984030 kubelet[1575]: W1213 14:24:21.983969    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:21.984195 kubelet[1575]: E1213 14:24:21.984052    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:22.001021 kubelet[1575]: I1213 14:24:22.000982    1575 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:24:22.001608 kubelet[1575]: W1213 14:24:22.001563    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:22.001675 kubelet[1575]: E1213 14:24:22.001617    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:22.003673 kubelet[1575]: I1213 14:24:22.003647    1575 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:24:22.004173 kubelet[1575]: W1213 14:24:22.004153    1575 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:24:22.004871 kubelet[1575]: I1213 14:24:22.004849    1575 server.go:1269] "Started kubelet"
Dec 13 14:24:22.010759 kubelet[1575]: I1213 14:24:22.010688    1575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:24:22.011311 kubelet[1575]: I1213 14:24:22.011298    1575 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:24:22.011488 kubelet[1575]: I1213 14:24:22.011463    1575 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:24:22.013793 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:24:22.014404 kubelet[1575]: I1213 14:24:22.014373    1575 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:24:22.014985 kubelet[1575]: I1213 14:24:22.014963    1575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:24:22.015312 kubelet[1575]: E1213 14:24:22.013668    1575 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c2a307d99b6a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:24:22.004800362 +0000 UTC m=+0.694215071,LastTimestamp:2024-12-13 14:24:22.004800362 +0000 UTC m=+0.694215071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 14:24:22.016239 kubelet[1575]: E1213 14:24:22.016222    1575 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:24:22.016336 kubelet[1575]: I1213 14:24:22.016266    1575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:24:22.017731 kubelet[1575]: I1213 14:24:22.017683    1575 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:24:22.017873 kubelet[1575]: I1213 14:24:22.017854    1575 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:24:22.017952 kubelet[1575]: I1213 14:24:22.017937    1575 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:24:22.021032 kubelet[1575]: W1213 14:24:22.020965    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:22.021032 kubelet[1575]: E1213 14:24:22.021035    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:22.021278 kubelet[1575]: E1213 14:24:22.021181    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.021864 kubelet[1575]: I1213 14:24:22.021485    1575 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:24:22.021864 kubelet[1575]: I1213 14:24:22.021585    1575 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:24:22.021864 kubelet[1575]: E1213 14:24:22.021388    1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms"
Dec 13 14:24:22.022898 kubelet[1575]: I1213 14:24:22.022879    1575 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:24:22.031994 kubelet[1575]: I1213 14:24:22.031938    1575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:24:22.032877 kubelet[1575]: I1213 14:24:22.032858    1575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:24:22.032955 kubelet[1575]: I1213 14:24:22.032912    1575 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:24:22.032955 kubelet[1575]: I1213 14:24:22.032948    1575 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:24:22.033021 kubelet[1575]: E1213 14:24:22.032998    1575 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:24:22.036334 kubelet[1575]: W1213 14:24:22.036295    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:22.036408 kubelet[1575]: E1213 14:24:22.036343    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:22.036793 kubelet[1575]: I1213 14:24:22.036770    1575 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:24:22.036793 kubelet[1575]: I1213 14:24:22.036784    1575 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:24:22.036923 kubelet[1575]: I1213 14:24:22.036802    1575 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:22.122118 kubelet[1575]: E1213 14:24:22.121864    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.134189 kubelet[1575]: E1213 14:24:22.134129    1575 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:24:22.222070 kubelet[1575]: E1213 14:24:22.221991    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.222475 kubelet[1575]: E1213 14:24:22.222434    1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms"
Dec 13 14:24:22.323193 kubelet[1575]: E1213 14:24:22.323118    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.335170 kubelet[1575]: E1213 14:24:22.335126    1575 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:24:22.423972 kubelet[1575]: E1213 14:24:22.423826    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.524890 kubelet[1575]: E1213 14:24:22.524813    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.623756 kubelet[1575]: E1213 14:24:22.623693    1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms"
Dec 13 14:24:22.625955 kubelet[1575]: E1213 14:24:22.625929    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.726805 kubelet[1575]: E1213 14:24:22.726654    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.735869 kubelet[1575]: E1213 14:24:22.735821    1575 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:24:22.827563 kubelet[1575]: E1213 14:24:22.827491    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.896246 kubelet[1575]: W1213 14:24:22.896166    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:22.896246 kubelet[1575]: E1213 14:24:22.896235    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:22.928076 kubelet[1575]: E1213 14:24:22.928013    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:22.954901 kubelet[1575]: W1213 14:24:22.954836    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:22.954901 kubelet[1575]: E1213 14:24:22.954892    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:23.028983 kubelet[1575]: E1213 14:24:23.028825    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:23.129891 kubelet[1575]: E1213 14:24:23.129809    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:23.230672 kubelet[1575]: E1213 14:24:23.230586    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:23.331458 kubelet[1575]: E1213 14:24:23.331392    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:23.424534 kubelet[1575]: W1213 14:24:23.424454    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:23.424902 kubelet[1575]: E1213 14:24:23.424532    1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s"
Dec 13 14:24:23.424902 kubelet[1575]: E1213 14:24:23.424546    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:23.432327 kubelet[1575]: E1213 14:24:23.432258    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:23.460348 kubelet[1575]: I1213 14:24:23.460286    1575 policy_none.go:49] "None policy: Start"
Dec 13 14:24:23.461138 kubelet[1575]: I1213 14:24:23.461104    1575 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:24:23.461138 kubelet[1575]: I1213 14:24:23.461138    1575 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:24:23.464671 kubelet[1575]: W1213 14:24:23.464616    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:23.464717 kubelet[1575]: E1213 14:24:23.464687    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:23.509398 systemd[1]: Created slice kubepods.slice.
Dec 13 14:24:23.513333 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:24:23.516206 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:24:23.526883 kubelet[1575]: I1213 14:24:23.526848    1575 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:24:23.527125 kubelet[1575]: I1213 14:24:23.527050    1575 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:24:23.527125 kubelet[1575]: I1213 14:24:23.527084    1575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:24:23.527779 kubelet[1575]: I1213 14:24:23.527755    1575 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:24:23.528439 kubelet[1575]: E1213 14:24:23.528418    1575 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 14:24:23.543375 systemd[1]: Created slice kubepods-burstable-pod30df36927e62c7a93a288b9f2b63fd9f.slice.
Dec 13 14:24:23.560541 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice.
Dec 13 14:24:23.572156 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice.
Dec 13 14:24:23.628948 kubelet[1575]: I1213 14:24:23.628388    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 14:24:23.628948 kubelet[1575]: I1213 14:24:23.628439    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30df36927e62c7a93a288b9f2b63fd9f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30df36927e62c7a93a288b9f2b63fd9f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:24:23.628948 kubelet[1575]: I1213 14:24:23.628460    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30df36927e62c7a93a288b9f2b63fd9f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30df36927e62c7a93a288b9f2b63fd9f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:24:23.628948 kubelet[1575]: I1213 14:24:23.628478    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:23.628948 kubelet[1575]: I1213 14:24:23.628495    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:23.629193 kubelet[1575]: I1213 14:24:23.628509    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:23.629193 kubelet[1575]: I1213 14:24:23.628522    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30df36927e62c7a93a288b9f2b63fd9f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30df36927e62c7a93a288b9f2b63fd9f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:24:23.629193 kubelet[1575]: I1213 14:24:23.628535    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:23.629193 kubelet[1575]: I1213 14:24:23.628637    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:23.629386 kubelet[1575]: I1213 14:24:23.629357    1575 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 14:24:23.629799 kubelet[1575]: E1213 14:24:23.629767    1575 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 14:24:23.832006 kubelet[1575]: I1213 14:24:23.831960    1575 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 14:24:23.832448 kubelet[1575]: E1213 14:24:23.832409    1575 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 14:24:23.859868 kubelet[1575]: E1213 14:24:23.859803    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:23.860649 env[1205]: time="2024-12-13T14:24:23.860593825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30df36927e62c7a93a288b9f2b63fd9f,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:23.870846 kubelet[1575]: E1213 14:24:23.870807    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:23.871430 env[1205]: time="2024-12-13T14:24:23.871373554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:23.874744 kubelet[1575]: E1213 14:24:23.874624    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:23.875166 env[1205]: time="2024-12-13T14:24:23.875092960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:24.069871 kubelet[1575]: E1213 14:24:24.069734    1575 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:24.233925 kubelet[1575]: I1213 14:24:24.233892    1575 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 14:24:24.234273 kubelet[1575]: E1213 14:24:24.234243    1575 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 14:24:24.501821 kubelet[1575]: W1213 14:24:24.501734    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:24.501821 kubelet[1575]: E1213 14:24:24.501813    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:25.025301 kubelet[1575]: E1213 14:24:25.025224    1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="3.2s"
Dec 13 14:24:25.035314 kubelet[1575]: I1213 14:24:25.035271    1575 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 14:24:25.035477 kubelet[1575]: E1213 14:24:25.035456    1575 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 14:24:25.071870 kubelet[1575]: W1213 14:24:25.071807    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:25.071870 kubelet[1575]: E1213 14:24:25.071862    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:25.370217 kubelet[1575]: W1213 14:24:25.370144    1575 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 14:24:25.370217 kubelet[1575]: E1213 14:24:25.370201    1575 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:25.440343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201731592.mount: Deactivated successfully.
Dec 13 14:24:25.446264 env[1205]: time="2024-12-13T14:24:25.446175038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.451697 env[1205]: time="2024-12-13T14:24:25.451634430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.452907 env[1205]: time="2024-12-13T14:24:25.452852627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.453906 env[1205]: time="2024-12-13T14:24:25.453849684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.457033 env[1205]: time="2024-12-13T14:24:25.456861220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.458168 env[1205]: time="2024-12-13T14:24:25.458125164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.459766 env[1205]: time="2024-12-13T14:24:25.459723524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.461633 env[1205]: time="2024-12-13T14:24:25.461578441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.462738 env[1205]: time="2024-12-13T14:24:25.462675587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.465174 env[1205]: time="2024-12-13T14:24:25.465139883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.466693 env[1205]: time="2024-12-13T14:24:25.466658842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.474821 env[1205]: time="2024-12-13T14:24:25.474748386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:25.506440 env[1205]: time="2024-12-13T14:24:25.506335495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:25.506440 env[1205]: time="2024-12-13T14:24:25.506387954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:25.506440 env[1205]: time="2024-12-13T14:24:25.506397853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:25.507179 env[1205]: time="2024-12-13T14:24:25.506612231Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee81749757e201394bdb989ceba305e8ab55b32b919816b6c2c22b5f391e8fc0 pid=1617 runtime=io.containerd.runc.v2
Dec 13 14:24:25.510644 env[1205]: time="2024-12-13T14:24:25.510567833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:25.510644 env[1205]: time="2024-12-13T14:24:25.510623289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:25.510644 env[1205]: time="2024-12-13T14:24:25.510634179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:25.510963 env[1205]: time="2024-12-13T14:24:25.510926776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43fb742b713b2158c1a1dc0582ab8b4dfe5a3839bfd805d4b2463bd9713e82f0 pid=1623 runtime=io.containerd.runc.v2
Dec 13 14:24:25.531543 env[1205]: time="2024-12-13T14:24:25.531458510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:25.531786 env[1205]: time="2024-12-13T14:24:25.531751999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:25.531906 env[1205]: time="2024-12-13T14:24:25.531874962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:25.532163 env[1205]: time="2024-12-13T14:24:25.532135658Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/972dd77f5cc3d2c13503f3d2b458104a61bf0c33a6a989c17e322d4e22696693 pid=1663 runtime=io.containerd.runc.v2
Dec 13 14:24:25.536439 systemd[1]: Started cri-containerd-43fb742b713b2158c1a1dc0582ab8b4dfe5a3839bfd805d4b2463bd9713e82f0.scope.
Dec 13 14:24:25.552887 systemd[1]: Started cri-containerd-ee81749757e201394bdb989ceba305e8ab55b32b919816b6c2c22b5f391e8fc0.scope.
Dec 13 14:24:25.582916 systemd[1]: Started cri-containerd-972dd77f5cc3d2c13503f3d2b458104a61bf0c33a6a989c17e322d4e22696693.scope.
Dec 13 14:24:25.642037 env[1205]: time="2024-12-13T14:24:25.641885117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"43fb742b713b2158c1a1dc0582ab8b4dfe5a3839bfd805d4b2463bd9713e82f0\""
Dec 13 14:24:25.646364 kubelet[1575]: E1213 14:24:25.646322    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:25.649623 env[1205]: time="2024-12-13T14:24:25.649480911Z" level=info msg="CreateContainer within sandbox \"43fb742b713b2158c1a1dc0582ab8b4dfe5a3839bfd805d4b2463bd9713e82f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:24:25.654446 env[1205]: time="2024-12-13T14:24:25.654408903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30df36927e62c7a93a288b9f2b63fd9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee81749757e201394bdb989ceba305e8ab55b32b919816b6c2c22b5f391e8fc0\""
Dec 13 14:24:25.656382 kubelet[1575]: E1213 14:24:25.656108    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:25.658394 env[1205]: time="2024-12-13T14:24:25.658366049Z" level=info msg="CreateContainer within sandbox \"ee81749757e201394bdb989ceba305e8ab55b32b919816b6c2c22b5f391e8fc0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:24:25.664534 env[1205]: time="2024-12-13T14:24:25.664485756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"972dd77f5cc3d2c13503f3d2b458104a61bf0c33a6a989c17e322d4e22696693\""
Dec 13 14:24:25.667017 kubelet[1575]: E1213 14:24:25.666982    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:25.671192 env[1205]: time="2024-12-13T14:24:25.671147926Z" level=info msg="CreateContainer within sandbox \"972dd77f5cc3d2c13503f3d2b458104a61bf0c33a6a989c17e322d4e22696693\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:24:25.685519 env[1205]: time="2024-12-13T14:24:25.685471685Z" level=info msg="CreateContainer within sandbox \"ee81749757e201394bdb989ceba305e8ab55b32b919816b6c2c22b5f391e8fc0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b8fa74804f57dce40fbab263e7c563ca3f00e15c17c5231832349019dccac5d\""
Dec 13 14:24:25.686547 env[1205]: time="2024-12-13T14:24:25.686521952Z" level=info msg="CreateContainer within sandbox \"43fb742b713b2158c1a1dc0582ab8b4dfe5a3839bfd805d4b2463bd9713e82f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9beee5ff47fd2cb30e366cc9ee033d72b4ba68840dd319957f8793affea6aa40\""
Dec 13 14:24:25.686695 env[1205]: time="2024-12-13T14:24:25.686533785Z" level=info msg="StartContainer for \"6b8fa74804f57dce40fbab263e7c563ca3f00e15c17c5231832349019dccac5d\""
Dec 13 14:24:25.687543 env[1205]: time="2024-12-13T14:24:25.687492598Z" level=info msg="StartContainer for \"9beee5ff47fd2cb30e366cc9ee033d72b4ba68840dd319957f8793affea6aa40\""
Dec 13 14:24:25.701793 env[1205]: time="2024-12-13T14:24:25.701736295Z" level=info msg="CreateContainer within sandbox \"972dd77f5cc3d2c13503f3d2b458104a61bf0c33a6a989c17e322d4e22696693\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12e267fb443ed9d0e00253f29bc768c08677246105a53d154f0e1869b7ca2d24\""
Dec 13 14:24:25.702547 env[1205]: time="2024-12-13T14:24:25.702508333Z" level=info msg="StartContainer for \"12e267fb443ed9d0e00253f29bc768c08677246105a53d154f0e1869b7ca2d24\""
Dec 13 14:24:25.707816 systemd[1]: Started cri-containerd-6b8fa74804f57dce40fbab263e7c563ca3f00e15c17c5231832349019dccac5d.scope.
Dec 13 14:24:25.713863 systemd[1]: Started cri-containerd-9beee5ff47fd2cb30e366cc9ee033d72b4ba68840dd319957f8793affea6aa40.scope.
Dec 13 14:24:25.727572 systemd[1]: Started cri-containerd-12e267fb443ed9d0e00253f29bc768c08677246105a53d154f0e1869b7ca2d24.scope.
Dec 13 14:24:25.767330 env[1205]: time="2024-12-13T14:24:25.767271141Z" level=info msg="StartContainer for \"9beee5ff47fd2cb30e366cc9ee033d72b4ba68840dd319957f8793affea6aa40\" returns successfully"
Dec 13 14:24:25.768407 env[1205]: time="2024-12-13T14:24:25.768367346Z" level=info msg="StartContainer for \"6b8fa74804f57dce40fbab263e7c563ca3f00e15c17c5231832349019dccac5d\" returns successfully"
Dec 13 14:24:25.790511 env[1205]: time="2024-12-13T14:24:25.790455961Z" level=info msg="StartContainer for \"12e267fb443ed9d0e00253f29bc768c08677246105a53d154f0e1869b7ca2d24\" returns successfully"
Dec 13 14:24:26.046018 kubelet[1575]: E1213 14:24:26.045420    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:26.048170 kubelet[1575]: E1213 14:24:26.048157    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:26.049896 kubelet[1575]: E1213 14:24:26.049873    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:26.638426 kubelet[1575]: I1213 14:24:26.637856    1575 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 14:24:27.052668 kubelet[1575]: E1213 14:24:27.052531    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:27.405035 kubelet[1575]: I1213 14:24:27.404985    1575 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Dec 13 14:24:27.405035 kubelet[1575]: E1213 14:24:27.405023    1575 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 13 14:24:27.413953 kubelet[1575]: E1213 14:24:27.413895    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:27.514919 kubelet[1575]: E1213 14:24:27.514847    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:27.616023 kubelet[1575]: E1213 14:24:27.615954    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:27.716746 kubelet[1575]: E1213 14:24:27.716601    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:27.797927 kubelet[1575]: E1213 14:24:27.797894    1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:27.816960 kubelet[1575]: E1213 14:24:27.816935    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:27.917971 kubelet[1575]: E1213 14:24:27.917915    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.018860 kubelet[1575]: E1213 14:24:28.018719    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.119727 kubelet[1575]: E1213 14:24:28.119670    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.219999 kubelet[1575]: E1213 14:24:28.219930    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.320557 kubelet[1575]: E1213 14:24:28.320386    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.421010 kubelet[1575]: E1213 14:24:28.420952    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.521123 kubelet[1575]: E1213 14:24:28.521082    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.621725 kubelet[1575]: E1213 14:24:28.621670    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.722280 kubelet[1575]: E1213 14:24:28.722220    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.822949 kubelet[1575]: E1213 14:24:28.822888    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:28.923817 kubelet[1575]: E1213 14:24:28.923715    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.024214 kubelet[1575]: E1213 14:24:29.024151    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.124964 kubelet[1575]: E1213 14:24:29.124894    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.225784 kubelet[1575]: E1213 14:24:29.225610    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.277549 update_engine[1190]: I1213 14:24:29.277454  1190 update_attempter.cc:509] Updating boot flags...
Dec 13 14:24:29.326268 kubelet[1575]: E1213 14:24:29.326207    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.428182 kubelet[1575]: E1213 14:24:29.426820    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.527676 kubelet[1575]: E1213 14:24:29.527518    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:29.754137 systemd[1]: Reloading.
Dec 13 14:24:29.827144 /usr/lib/systemd/system-generators/torcx-generator[1888]: time="2024-12-13T14:24:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:24:29.827179 /usr/lib/systemd/system-generators/torcx-generator[1888]: time="2024-12-13T14:24:29Z" level=info msg="torcx already run"
Dec 13 14:24:29.899022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:24:29.899046 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:24:29.924091 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:24:29.988812 kubelet[1575]: I1213 14:24:29.988761    1575 apiserver.go:52] "Watching apiserver"
Dec 13 14:24:30.018573 kubelet[1575]: I1213 14:24:30.018536    1575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:24:30.031680 systemd[1]: Stopping kubelet.service...
Dec 13 14:24:30.058500 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:24:30.058664 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:30.058706 systemd[1]: kubelet.service: Consumed 1.127s CPU time.
Dec 13 14:24:30.060155 systemd[1]: Starting kubelet.service...
Dec 13 14:24:30.139121 systemd[1]: Started kubelet.service.
Dec 13 14:24:30.193391 kubelet[1935]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:30.193391 kubelet[1935]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:24:30.193391 kubelet[1935]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:30.193780 kubelet[1935]: I1213 14:24:30.193450    1935 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:24:30.198519 kubelet[1935]: I1213 14:24:30.198491    1935 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:24:30.198519 kubelet[1935]: I1213 14:24:30.198510    1935 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:24:30.199267 kubelet[1935]: I1213 14:24:30.198696    1935 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:24:30.204673 kubelet[1935]: I1213 14:24:30.204635    1935 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:24:30.207298 kubelet[1935]: I1213 14:24:30.207270    1935 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:24:30.210755 kubelet[1935]: E1213 14:24:30.210714    1935 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:24:30.210755 kubelet[1935]: I1213 14:24:30.210743    1935 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:24:30.214455 kubelet[1935]: I1213 14:24:30.214425    1935 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 14:24:30.214544 kubelet[1935]: I1213 14:24:30.214523    1935 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:24:30.214644 kubelet[1935]: I1213 14:24:30.214611    1935 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:24:30.214802 kubelet[1935]: I1213 14:24:30.214640    1935 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:24:30.214891 kubelet[1935]: I1213 14:24:30.214803    1935 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:24:30.214891 kubelet[1935]: I1213 14:24:30.214812    1935 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:24:30.214891 kubelet[1935]: I1213 14:24:30.214843    1935 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:30.214968 kubelet[1935]: I1213 14:24:30.214914    1935 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:24:30.214968 kubelet[1935]: I1213 14:24:30.214925    1935 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:24:30.214968 kubelet[1935]: I1213 14:24:30.214947    1935 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:24:30.214968 kubelet[1935]: I1213 14:24:30.214959    1935 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:24:30.215711 kubelet[1935]: I1213 14:24:30.215676    1935 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:24:30.217128 kubelet[1935]: I1213 14:24:30.216012    1935 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:24:30.220071 kubelet[1935]: I1213 14:24:30.220030    1935 server.go:1269] "Started kubelet"
Dec 13 14:24:30.221775 kubelet[1935]: I1213 14:24:30.221747    1935 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:24:30.226179 kubelet[1935]: I1213 14:24:30.226139    1935 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:24:30.228883 kubelet[1935]: I1213 14:24:30.228818    1935 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:24:30.229470 kubelet[1935]: I1213 14:24:30.229430    1935 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:24:30.229739 kubelet[1935]: I1213 14:24:30.229718    1935 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:24:30.230130 kubelet[1935]: I1213 14:24:30.230113    1935 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:24:30.230236 kubelet[1935]: I1213 14:24:30.230221    1935 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:24:30.230905 kubelet[1935]: I1213 14:24:30.230864    1935 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:24:30.231115 kubelet[1935]: E1213 14:24:30.231097    1935 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 14:24:30.231364 kubelet[1935]: I1213 14:24:30.231348    1935 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:24:30.235273 kubelet[1935]: I1213 14:24:30.235180    1935 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:24:30.237714 kubelet[1935]: E1213 14:24:30.236497    1935 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:24:30.237714 kubelet[1935]: I1213 14:24:30.236650    1935 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:24:30.237714 kubelet[1935]: I1213 14:24:30.236659    1935 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:24:30.241213 kubelet[1935]: I1213 14:24:30.241156    1935 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:24:30.242542 kubelet[1935]: I1213 14:24:30.242499    1935 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:24:30.242542 kubelet[1935]: I1213 14:24:30.242538    1935 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:24:30.242726 kubelet[1935]: I1213 14:24:30.242557    1935 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:24:30.242726 kubelet[1935]: E1213 14:24:30.242603    1935 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:24:30.269216 kubelet[1935]: I1213 14:24:30.269174    1935 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:24:30.269216 kubelet[1935]: I1213 14:24:30.269195    1935 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:24:30.269216 kubelet[1935]: I1213 14:24:30.269215    1935 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:30.269429 kubelet[1935]: I1213 14:24:30.269381    1935 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:24:30.269429 kubelet[1935]: I1213 14:24:30.269391    1935 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:24:30.269429 kubelet[1935]: I1213 14:24:30.269410    1935 policy_none.go:49] "None policy: Start"
Dec 13 14:24:30.269973 kubelet[1935]: I1213 14:24:30.269946    1935 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:24:30.270023 kubelet[1935]: I1213 14:24:30.269981    1935 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:24:30.270226 kubelet[1935]: I1213 14:24:30.270208    1935 state_mem.go:75] "Updated machine memory state"
Dec 13 14:24:30.275117 kubelet[1935]: I1213 14:24:30.274726    1935 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:24:30.275117 kubelet[1935]: I1213 14:24:30.274917    1935 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:24:30.275117 kubelet[1935]: I1213 14:24:30.274927    1935 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:24:30.275462 kubelet[1935]: I1213 14:24:30.275438    1935 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:24:30.380180 kubelet[1935]: I1213 14:24:30.380133    1935 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 14:24:30.386900 kubelet[1935]: I1213 14:24:30.386866    1935 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Dec 13 14:24:30.387094 kubelet[1935]: I1213 14:24:30.386953    1935 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Dec 13 14:24:30.531640 kubelet[1935]: I1213 14:24:30.531460    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:30.531640 kubelet[1935]: I1213 14:24:30.531531    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:30.531640 kubelet[1935]: I1213 14:24:30.531593    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 14:24:30.531640 kubelet[1935]: I1213 14:24:30.531612    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30df36927e62c7a93a288b9f2b63fd9f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30df36927e62c7a93a288b9f2b63fd9f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:24:30.531640 kubelet[1935]: I1213 14:24:30.531629    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30df36927e62c7a93a288b9f2b63fd9f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30df36927e62c7a93a288b9f2b63fd9f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:24:30.531976 kubelet[1935]: I1213 14:24:30.531644    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:30.531976 kubelet[1935]: I1213 14:24:30.531662    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:30.531976 kubelet[1935]: I1213 14:24:30.531677    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30df36927e62c7a93a288b9f2b63fd9f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30df36927e62c7a93a288b9f2b63fd9f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 14:24:30.531976 kubelet[1935]: I1213 14:24:30.531693    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 14:24:30.654233 kubelet[1935]: E1213 14:24:30.654178    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:30.655830 kubelet[1935]: E1213 14:24:30.654437    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:30.655830 kubelet[1935]: E1213 14:24:30.654678    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:30.753743 sudo[1968]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:24:30.754030 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:24:31.215820 kubelet[1935]: I1213 14:24:31.215784    1935 apiserver.go:52] "Watching apiserver"
Dec 13 14:24:31.230959 kubelet[1935]: I1213 14:24:31.230915    1935 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:24:31.241861 sudo[1968]: pam_unix(sudo:session): session closed for user root
Dec 13 14:24:31.252554 kubelet[1935]: E1213 14:24:31.252519    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:31.253118 kubelet[1935]: E1213 14:24:31.253094    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:31.253279 kubelet[1935]: E1213 14:24:31.253257    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:31.280560 kubelet[1935]: I1213 14:24:31.280400    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2803715439999999 podStartE2EDuration="1.280371544s" podCreationTimestamp="2024-12-13 14:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:31.272364996 +0000 UTC m=+1.126873682" watchObservedRunningTime="2024-12-13 14:24:31.280371544 +0000 UTC m=+1.134880230"
Dec 13 14:24:31.281118 kubelet[1935]: I1213 14:24:31.281088    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2810790939999999 podStartE2EDuration="1.281079094s" podCreationTimestamp="2024-12-13 14:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:31.280215088 +0000 UTC m=+1.134723774" watchObservedRunningTime="2024-12-13 14:24:31.281079094 +0000 UTC m=+1.135587780"
Dec 13 14:24:32.254164 kubelet[1935]: E1213 14:24:32.254119    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:33.212527 sudo[1301]: pam_unix(sudo:session): session closed for user root
Dec 13 14:24:33.213791 sshd[1298]: pam_unix(sshd:session): session closed for user core
Dec 13 14:24:33.215879 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:52552.service: Deactivated successfully.
Dec 13 14:24:33.216682 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:24:33.216827 systemd[1]: session-5.scope: Consumed 4.788s CPU time.
Dec 13 14:24:33.217224 systemd-logind[1187]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:24:33.217863 systemd-logind[1187]: Removed session 5.
Dec 13 14:24:33.420585 kubelet[1935]: E1213 14:24:33.420509    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:34.588235 kubelet[1935]: I1213 14:24:34.588176    1935 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:24:34.588591 env[1205]: time="2024-12-13T14:24:34.588523517Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:24:34.588757 kubelet[1935]: I1213 14:24:34.588689    1935 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:24:35.108204 kubelet[1935]: I1213 14:24:35.108140    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.108113647 podStartE2EDuration="5.108113647s" podCreationTimestamp="2024-12-13 14:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:31.292052979 +0000 UTC m=+1.146561665" watchObservedRunningTime="2024-12-13 14:24:35.108113647 +0000 UTC m=+4.962622333"
Dec 13 14:24:35.114680 systemd[1]: Created slice kubepods-besteffort-podaa8cb5b9_f1a7_44a6_a42a_8976761947ad.slice.
Dec 13 14:24:35.158630 kubelet[1935]: I1213 14:24:35.158557    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-cilium-config-path\") pod \"cilium-operator-5d85765b45-w85br\" (UID: \"aa8cb5b9-f1a7-44a6-a42a-8976761947ad\") " pod="kube-system/cilium-operator-5d85765b45-w85br"
Dec 13 14:24:35.158630 kubelet[1935]: I1213 14:24:35.158617    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmplf\" (UniqueName: \"kubernetes.io/projected/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-kube-api-access-fmplf\") pod \"cilium-operator-5d85765b45-w85br\" (UID: \"aa8cb5b9-f1a7-44a6-a42a-8976761947ad\") " pod="kube-system/cilium-operator-5d85765b45-w85br"
Dec 13 14:24:35.264119 kubelet[1935]: I1213 14:24:35.264041    1935 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 14:24:35.410747 systemd[1]: Created slice kubepods-besteffort-pode6385f6b_4bf7_4361_9760_69f2e5201df2.slice.
Dec 13 14:24:35.419783 systemd[1]: Created slice kubepods-burstable-poda00218d7_4562_41ff_a855_272aed7c022c.slice.
Dec 13 14:24:35.424099 kubelet[1935]: E1213 14:24:35.424054    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:35.424728 env[1205]: time="2024-12-13T14:24:35.424685673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w85br,Uid:aa8cb5b9-f1a7-44a6-a42a-8976761947ad,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:35.460309 kubelet[1935]: I1213 14:24:35.460253    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-lib-modules\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460309 kubelet[1935]: I1213 14:24:35.460300    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6385f6b-4bf7-4361-9760-69f2e5201df2-kube-proxy\") pod \"kube-proxy-plrfx\" (UID: \"e6385f6b-4bf7-4361-9760-69f2e5201df2\") " pod="kube-system/kube-proxy-plrfx"
Dec 13 14:24:35.460551 kubelet[1935]: I1213 14:24:35.460326    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-cgroup\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460551 kubelet[1935]: I1213 14:24:35.460345    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cni-path\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460551 kubelet[1935]: I1213 14:24:35.460364    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a00218d7-4562-41ff-a855-272aed7c022c-clustermesh-secrets\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460551 kubelet[1935]: I1213 14:24:35.460382    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a00218d7-4562-41ff-a855-272aed7c022c-cilium-config-path\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460551 kubelet[1935]: I1213 14:24:35.460403    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-xtables-lock\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460551 kubelet[1935]: I1213 14:24:35.460458    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-net\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460766 kubelet[1935]: I1213 14:24:35.460497    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-hubble-tls\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460766 kubelet[1935]: I1213 14:24:35.460528    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn858\" (UniqueName: \"kubernetes.io/projected/e6385f6b-4bf7-4361-9760-69f2e5201df2-kube-api-access-nn858\") pod \"kube-proxy-plrfx\" (UID: \"e6385f6b-4bf7-4361-9760-69f2e5201df2\") " pod="kube-system/kube-proxy-plrfx"
Dec 13 14:24:35.460766 kubelet[1935]: I1213 14:24:35.460544    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-kernel\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460766 kubelet[1935]: I1213 14:24:35.460558    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tnf5\" (UniqueName: \"kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-kube-api-access-8tnf5\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460766 kubelet[1935]: I1213 14:24:35.460572    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6385f6b-4bf7-4361-9760-69f2e5201df2-xtables-lock\") pod \"kube-proxy-plrfx\" (UID: \"e6385f6b-4bf7-4361-9760-69f2e5201df2\") " pod="kube-system/kube-proxy-plrfx"
Dec 13 14:24:35.460891 kubelet[1935]: I1213 14:24:35.460587    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-hostproc\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460891 kubelet[1935]: I1213 14:24:35.460602    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-bpf-maps\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460891 kubelet[1935]: I1213 14:24:35.460622    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-run\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460891 kubelet[1935]: I1213 14:24:35.460638    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-etc-cni-netd\") pod \"cilium-76nvq\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") " pod="kube-system/cilium-76nvq"
Dec 13 14:24:35.460891 kubelet[1935]: I1213 14:24:35.460665    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6385f6b-4bf7-4361-9760-69f2e5201df2-lib-modules\") pod \"kube-proxy-plrfx\" (UID: \"e6385f6b-4bf7-4361-9760-69f2e5201df2\") " pod="kube-system/kube-proxy-plrfx"
Dec 13 14:24:35.666352 env[1205]: time="2024-12-13T14:24:35.665605310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:35.666352 env[1205]: time="2024-12-13T14:24:35.665648802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:35.666701 env[1205]: time="2024-12-13T14:24:35.665659142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:35.666701 env[1205]: time="2024-12-13T14:24:35.665828352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766 pid=2031 runtime=io.containerd.runc.v2
Dec 13 14:24:35.697748 systemd[1]: Started cri-containerd-de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766.scope.
Dec 13 14:24:35.713713 kubelet[1935]: E1213 14:24:35.713402    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:35.714054 env[1205]: time="2024-12-13T14:24:35.713856234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plrfx,Uid:e6385f6b-4bf7-4361-9760-69f2e5201df2,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:35.722463 kubelet[1935]: E1213 14:24:35.721605    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:35.723506 env[1205]: time="2024-12-13T14:24:35.723455298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76nvq,Uid:a00218d7-4562-41ff-a855-272aed7c022c,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:35.770284 env[1205]: time="2024-12-13T14:24:35.770039443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:35.770284 env[1205]: time="2024-12-13T14:24:35.770108694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:35.770284 env[1205]: time="2024-12-13T14:24:35.770121329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:35.770533 kubelet[1935]: E1213 14:24:35.770460    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:35.770731 env[1205]: time="2024-12-13T14:24:35.770663262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9a0def15f58258f9170d4ae46443b2348696889fb2ae8c6703064540e901a79 pid=2064 runtime=io.containerd.runc.v2
Dec 13 14:24:35.785241 env[1205]: time="2024-12-13T14:24:35.785171711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w85br,Uid:aa8cb5b9-f1a7-44a6-a42a-8976761947ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766\""
Dec 13 14:24:35.786137 kubelet[1935]: E1213 14:24:35.786101    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:35.788662 env[1205]: time="2024-12-13T14:24:35.788618934Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:24:35.795151 systemd[1]: Started cri-containerd-c9a0def15f58258f9170d4ae46443b2348696889fb2ae8c6703064540e901a79.scope.
Dec 13 14:24:35.822514 env[1205]: time="2024-12-13T14:24:35.822434165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-plrfx,Uid:e6385f6b-4bf7-4361-9760-69f2e5201df2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a0def15f58258f9170d4ae46443b2348696889fb2ae8c6703064540e901a79\""
Dec 13 14:24:35.823306 kubelet[1935]: E1213 14:24:35.823263    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:35.825200 env[1205]: time="2024-12-13T14:24:35.825161418Z" level=info msg="CreateContainer within sandbox \"c9a0def15f58258f9170d4ae46443b2348696889fb2ae8c6703064540e901a79\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:24:35.989698 env[1205]: time="2024-12-13T14:24:35.989531440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:35.989698 env[1205]: time="2024-12-13T14:24:35.989584089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:35.989698 env[1205]: time="2024-12-13T14:24:35.989604929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:35.989921 env[1205]: time="2024-12-13T14:24:35.989755513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92 pid=2109 runtime=io.containerd.runc.v2
Dec 13 14:24:36.000883 systemd[1]: Started cri-containerd-ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92.scope.
Dec 13 14:24:36.028745 env[1205]: time="2024-12-13T14:24:36.028673912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76nvq,Uid:a00218d7-4562-41ff-a855-272aed7c022c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\""
Dec 13 14:24:36.029240 kubelet[1935]: E1213 14:24:36.029206    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:36.217512 env[1205]: time="2024-12-13T14:24:36.217416365Z" level=info msg="CreateContainer within sandbox \"c9a0def15f58258f9170d4ae46443b2348696889fb2ae8c6703064540e901a79\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"27dac9294a211ffe5ee3619d54b82ece17aa6703fb963f9e1816d5895450433d\""
Dec 13 14:24:36.218593 env[1205]: time="2024-12-13T14:24:36.218517504Z" level=info msg="StartContainer for \"27dac9294a211ffe5ee3619d54b82ece17aa6703fb963f9e1816d5895450433d\""
Dec 13 14:24:36.236559 systemd[1]: Started cri-containerd-27dac9294a211ffe5ee3619d54b82ece17aa6703fb963f9e1816d5895450433d.scope.
Dec 13 14:24:36.264678 kubelet[1935]: E1213 14:24:36.264531    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:36.275855 env[1205]: time="2024-12-13T14:24:36.275805461Z" level=info msg="StartContainer for \"27dac9294a211ffe5ee3619d54b82ece17aa6703fb963f9e1816d5895450433d\" returns successfully"
Dec 13 14:24:37.267454 kubelet[1935]: E1213 14:24:37.267422    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:37.314693 kubelet[1935]: I1213 14:24:37.314606    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-plrfx" podStartSLOduration=2.314584124 podStartE2EDuration="2.314584124s" podCreationTimestamp="2024-12-13 14:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:37.314480679 +0000 UTC m=+7.168989365" watchObservedRunningTime="2024-12-13 14:24:37.314584124 +0000 UTC m=+7.169092810"
Dec 13 14:24:37.459366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974154828.mount: Deactivated successfully.
Dec 13 14:24:38.269516 kubelet[1935]: E1213 14:24:38.269451    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:38.510265 env[1205]: time="2024-12-13T14:24:38.510199963Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.536100 env[1205]: time="2024-12-13T14:24:38.535939077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.538891 env[1205]: time="2024-12-13T14:24:38.538824511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.539296 env[1205]: time="2024-12-13T14:24:38.539258369Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:24:38.540471 env[1205]: time="2024-12-13T14:24:38.540427956Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:24:38.541496 env[1205]: time="2024-12-13T14:24:38.541459713Z" level=info msg="CreateContainer within sandbox \"de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:24:38.596846 env[1205]: time="2024-12-13T14:24:38.596761101Z" level=info msg="CreateContainer within sandbox \"de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\""
Dec 13 14:24:38.597482 env[1205]: time="2024-12-13T14:24:38.597440463Z" level=info msg="StartContainer for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\""
Dec 13 14:24:38.619378 systemd[1]: run-containerd-runc-k8s.io-cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260-runc.cZdhpw.mount: Deactivated successfully.
Dec 13 14:24:38.626336 systemd[1]: Started cri-containerd-cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260.scope.
Dec 13 14:24:38.980667 env[1205]: time="2024-12-13T14:24:38.980594563Z" level=info msg="StartContainer for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" returns successfully"
Dec 13 14:24:39.212924 kubelet[1935]: E1213 14:24:39.212867    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:39.272821 kubelet[1935]: E1213 14:24:39.272708    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:39.273806 kubelet[1935]: E1213 14:24:39.273746    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:39.332497 kubelet[1935]: I1213 14:24:39.332443    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-w85br" podStartSLOduration=1.580313984 podStartE2EDuration="4.332419116s" podCreationTimestamp="2024-12-13 14:24:35 +0000 UTC" firstStartedPulling="2024-12-13 14:24:35.788115202 +0000 UTC m=+5.642623888" lastFinishedPulling="2024-12-13 14:24:38.540220334 +0000 UTC m=+8.394729020" observedRunningTime="2024-12-13 14:24:39.312211965 +0000 UTC m=+9.166720651" watchObservedRunningTime="2024-12-13 14:24:39.332419116 +0000 UTC m=+9.186927802"
Dec 13 14:24:40.274558 kubelet[1935]: E1213 14:24:40.274504    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:40.274974 kubelet[1935]: E1213 14:24:40.274583    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:43.425470 kubelet[1935]: E1213 14:24:43.425427    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:45.814954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652293426.mount: Deactivated successfully.
Dec 13 14:24:49.927364 env[1205]: time="2024-12-13T14:24:49.927297598Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:49.929529 env[1205]: time="2024-12-13T14:24:49.929458542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:49.931390 env[1205]: time="2024-12-13T14:24:49.931324922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:49.931933 env[1205]: time="2024-12-13T14:24:49.931884294Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:24:49.936293 env[1205]: time="2024-12-13T14:24:49.936242673Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:24:49.949240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700504601.mount: Deactivated successfully.
Dec 13 14:24:49.950750 env[1205]: time="2024-12-13T14:24:49.950705376Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\""
Dec 13 14:24:49.951281 env[1205]: time="2024-12-13T14:24:49.951255872Z" level=info msg="StartContainer for \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\""
Dec 13 14:24:49.970962 systemd[1]: Started cri-containerd-c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3.scope.
Dec 13 14:24:50.007856 systemd[1]: cri-containerd-c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3.scope: Deactivated successfully.
Dec 13 14:24:50.709385 env[1205]: time="2024-12-13T14:24:50.709320016Z" level=info msg="StartContainer for \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\" returns successfully"
Dec 13 14:24:50.792947 env[1205]: time="2024-12-13T14:24:50.792868036Z" level=info msg="shim disconnected" id=c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3
Dec 13 14:24:50.792947 env[1205]: time="2024-12-13T14:24:50.792921215Z" level=warning msg="cleaning up after shim disconnected" id=c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3 namespace=k8s.io
Dec 13 14:24:50.792947 env[1205]: time="2024-12-13T14:24:50.792931035Z" level=info msg="cleaning up dead shim"
Dec 13 14:24:50.802848 env[1205]: time="2024-12-13T14:24:50.802781168Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2397 runtime=io.containerd.runc.v2\n"
Dec 13 14:24:50.946770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3-rootfs.mount: Deactivated successfully.
Dec 13 14:24:51.721187 kubelet[1935]: E1213 14:24:51.721041    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:51.722717 env[1205]: time="2024-12-13T14:24:51.722645380Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:24:51.893198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158353284.mount: Deactivated successfully.
Dec 13 14:24:51.922224 env[1205]: time="2024-12-13T14:24:51.922178387Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\""
Dec 13 14:24:51.922766 env[1205]: time="2024-12-13T14:24:51.922714405Z" level=info msg="StartContainer for \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\""
Dec 13 14:24:51.941176 systemd[1]: Started cri-containerd-96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded.scope.
Dec 13 14:24:51.946392 systemd[1]: run-containerd-runc-k8s.io-96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded-runc.7sU5sZ.mount: Deactivated successfully.
Dec 13 14:24:51.963065 env[1205]: time="2024-12-13T14:24:51.962971590Z" level=info msg="StartContainer for \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\" returns successfully"
Dec 13 14:24:51.974424 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:24:51.974687 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:24:51.974971 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:24:51.978353 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:24:51.981188 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:24:51.984684 systemd[1]: cri-containerd-96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded.scope: Deactivated successfully.
Dec 13 14:24:51.991115 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:24:51.995400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded-rootfs.mount: Deactivated successfully.
Dec 13 14:24:52.004094 env[1205]: time="2024-12-13T14:24:52.004029999Z" level=info msg="shim disconnected" id=96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded
Dec 13 14:24:52.004205 env[1205]: time="2024-12-13T14:24:52.004102786Z" level=warning msg="cleaning up after shim disconnected" id=96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded namespace=k8s.io
Dec 13 14:24:52.004205 env[1205]: time="2024-12-13T14:24:52.004112685Z" level=info msg="cleaning up dead shim"
Dec 13 14:24:52.009775 env[1205]: time="2024-12-13T14:24:52.009728813Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2462 runtime=io.containerd.runc.v2\n"
Dec 13 14:24:52.724802 kubelet[1935]: E1213 14:24:52.724728    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:52.726842 env[1205]: time="2024-12-13T14:24:52.726801496Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:24:52.747453 env[1205]: time="2024-12-13T14:24:52.747394690Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\""
Dec 13 14:24:52.747857 env[1205]: time="2024-12-13T14:24:52.747812175Z" level=info msg="StartContainer for \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\""
Dec 13 14:24:52.763521 systemd[1]: Started cri-containerd-f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052.scope.
Dec 13 14:24:52.793202 systemd[1]: cri-containerd-f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052.scope: Deactivated successfully.
Dec 13 14:24:52.793756 env[1205]: time="2024-12-13T14:24:52.793265475Z" level=info msg="StartContainer for \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\" returns successfully"
Dec 13 14:24:52.815957 env[1205]: time="2024-12-13T14:24:52.815882565Z" level=info msg="shim disconnected" id=f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052
Dec 13 14:24:52.815957 env[1205]: time="2024-12-13T14:24:52.815939872Z" level=warning msg="cleaning up after shim disconnected" id=f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052 namespace=k8s.io
Dec 13 14:24:52.815957 env[1205]: time="2024-12-13T14:24:52.815949170Z" level=info msg="cleaning up dead shim"
Dec 13 14:24:52.822053 env[1205]: time="2024-12-13T14:24:52.822025373Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2518 runtime=io.containerd.runc.v2\n"
Dec 13 14:24:52.946358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051222381.mount: Deactivated successfully.
Dec 13 14:24:53.728220 kubelet[1935]: E1213 14:24:53.728187    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:53.730329 env[1205]: time="2024-12-13T14:24:53.730270263Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:24:53.749736 env[1205]: time="2024-12-13T14:24:53.749682141Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\""
Dec 13 14:24:53.750207 env[1205]: time="2024-12-13T14:24:53.750164778Z" level=info msg="StartContainer for \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\""
Dec 13 14:24:53.765824 systemd[1]: Started cri-containerd-4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21.scope.
Dec 13 14:24:53.785483 systemd[1]: cri-containerd-4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21.scope: Deactivated successfully.
Dec 13 14:24:53.793113 env[1205]: time="2024-12-13T14:24:53.793051430Z" level=info msg="StartContainer for \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\" returns successfully"
Dec 13 14:24:53.815016 env[1205]: time="2024-12-13T14:24:53.814953108Z" level=info msg="shim disconnected" id=4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21
Dec 13 14:24:53.815212 env[1205]: time="2024-12-13T14:24:53.815017450Z" level=warning msg="cleaning up after shim disconnected" id=4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21 namespace=k8s.io
Dec 13 14:24:53.815212 env[1205]: time="2024-12-13T14:24:53.815029092Z" level=info msg="cleaning up dead shim"
Dec 13 14:24:53.821769 env[1205]: time="2024-12-13T14:24:53.821712996Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n"
Dec 13 14:24:53.946296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21-rootfs.mount: Deactivated successfully.
Dec 13 14:24:54.733481 kubelet[1935]: E1213 14:24:54.733264    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:54.735861 env[1205]: time="2024-12-13T14:24:54.735818290Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:24:54.767995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173153189.mount: Deactivated successfully.
Dec 13 14:24:54.771464 env[1205]: time="2024-12-13T14:24:54.770921384Z" level=info msg="CreateContainer within sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\""
Dec 13 14:24:54.771969 env[1205]: time="2024-12-13T14:24:54.771918428Z" level=info msg="StartContainer for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\""
Dec 13 14:24:54.787927 systemd[1]: Started cri-containerd-12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581.scope.
Dec 13 14:24:54.824341 env[1205]: time="2024-12-13T14:24:54.824249739Z" level=info msg="StartContainer for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" returns successfully"
Dec 13 14:24:54.894744 kubelet[1935]: I1213 14:24:54.893956    1935 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 14:24:54.940384 systemd[1]: Created slice kubepods-burstable-pod801f1ea3_ba1e_49c7_9354_7208b4c702bd.slice.
Dec 13 14:24:54.949296 systemd[1]: Created slice kubepods-burstable-pod9ecad733_4c89_4bc6_a858_51e46b76ae4d.slice.
Dec 13 14:24:55.002418 kubelet[1935]: I1213 14:24:55.002276    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpz85\" (UniqueName: \"kubernetes.io/projected/9ecad733-4c89-4bc6-a858-51e46b76ae4d-kube-api-access-kpz85\") pod \"coredns-6f6b679f8f-6qmrc\" (UID: \"9ecad733-4c89-4bc6-a858-51e46b76ae4d\") " pod="kube-system/coredns-6f6b679f8f-6qmrc"
Dec 13 14:24:55.002418 kubelet[1935]: I1213 14:24:55.002333    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/801f1ea3-ba1e-49c7-9354-7208b4c702bd-config-volume\") pod \"coredns-6f6b679f8f-b7454\" (UID: \"801f1ea3-ba1e-49c7-9354-7208b4c702bd\") " pod="kube-system/coredns-6f6b679f8f-b7454"
Dec 13 14:24:55.002418 kubelet[1935]: I1213 14:24:55.002353    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ecad733-4c89-4bc6-a858-51e46b76ae4d-config-volume\") pod \"coredns-6f6b679f8f-6qmrc\" (UID: \"9ecad733-4c89-4bc6-a858-51e46b76ae4d\") " pod="kube-system/coredns-6f6b679f8f-6qmrc"
Dec 13 14:24:55.002418 kubelet[1935]: I1213 14:24:55.002368    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxzb\" (UniqueName: \"kubernetes.io/projected/801f1ea3-ba1e-49c7-9354-7208b4c702bd-kube-api-access-fcxzb\") pod \"coredns-6f6b679f8f-b7454\" (UID: \"801f1ea3-ba1e-49c7-9354-7208b4c702bd\") " pod="kube-system/coredns-6f6b679f8f-b7454"
Dec 13 14:24:55.243895 kubelet[1935]: E1213 14:24:55.243804    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:55.245510 env[1205]: time="2024-12-13T14:24:55.245145372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7454,Uid:801f1ea3-ba1e-49c7-9354-7208b4c702bd,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:55.252806 kubelet[1935]: E1213 14:24:55.252691    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:55.253331 env[1205]: time="2024-12-13T14:24:55.253292672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6qmrc,Uid:9ecad733-4c89-4bc6-a858-51e46b76ae4d,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:55.741308 kubelet[1935]: E1213 14:24:55.741275    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:55.968658 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:33460.service.
Dec 13 14:24:56.015586 sshd[2766]: Accepted publickey for core from 10.0.0.1 port 33460 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:24:56.017286 sshd[2766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:56.020857 systemd-logind[1187]: New session 6 of user core.
Dec 13 14:24:56.021655 systemd[1]: Started session-6.scope.
Dec 13 14:24:56.162162 sshd[2766]: pam_unix(sshd:session): session closed for user core
Dec 13 14:24:56.164760 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:33460.service: Deactivated successfully.
Dec 13 14:24:56.165504 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:24:56.165993 systemd-logind[1187]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:24:56.166804 systemd-logind[1187]: Removed session 6.
Dec 13 14:24:56.743220 kubelet[1935]: E1213 14:24:56.743153    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:56.826228 systemd-networkd[1032]: cilium_host: Link UP
Dec 13 14:24:56.826379 systemd-networkd[1032]: cilium_net: Link UP
Dec 13 14:24:56.826382 systemd-networkd[1032]: cilium_net: Gained carrier
Dec 13 14:24:56.826506 systemd-networkd[1032]: cilium_host: Gained carrier
Dec 13 14:24:56.830768 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:24:56.830154 systemd-networkd[1032]: cilium_host: Gained IPv6LL
Dec 13 14:24:56.885247 systemd-networkd[1032]: cilium_net: Gained IPv6LL
Dec 13 14:24:56.914703 systemd-networkd[1032]: cilium_vxlan: Link UP
Dec 13 14:24:56.914711 systemd-networkd[1032]: cilium_vxlan: Gained carrier
Dec 13 14:24:57.145094 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:24:57.751944 kubelet[1935]: E1213 14:24:57.751865    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:57.792011 systemd-networkd[1032]: lxc_health: Link UP
Dec 13 14:24:57.831114 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:24:57.831406 systemd-networkd[1032]: lxc_health: Gained carrier
Dec 13 14:24:58.108350 systemd-networkd[1032]: cilium_vxlan: Gained IPv6LL
Dec 13 14:24:58.325688 systemd-networkd[1032]: lxcc4af9202cc3a: Link UP
Dec 13 14:24:58.334092 kernel: eth0: renamed from tmp2ad99
Dec 13 14:24:58.342561 systemd-networkd[1032]: lxc946860f32094: Link UP
Dec 13 14:24:58.352088 kernel: eth0: renamed from tmp8df19
Dec 13 14:24:58.359533 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:24:58.359648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc4af9202cc3a: link becomes ready
Dec 13 14:24:58.362847 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc946860f32094: link becomes ready
Dec 13 14:24:58.362223 systemd-networkd[1032]: lxcc4af9202cc3a: Gained carrier
Dec 13 14:24:58.362492 systemd-networkd[1032]: lxc946860f32094: Gained carrier
Dec 13 14:24:59.196318 systemd-networkd[1032]: lxc_health: Gained IPv6LL
Dec 13 14:24:59.388619 systemd-networkd[1032]: lxcc4af9202cc3a: Gained IPv6LL
Dec 13 14:24:59.580579 systemd-networkd[1032]: lxc946860f32094: Gained IPv6LL
Dec 13 14:24:59.723914 kubelet[1935]: E1213 14:24:59.723866    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:24:59.782743 kubelet[1935]: I1213 14:24:59.782663    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-76nvq" podStartSLOduration=10.877808253 podStartE2EDuration="24.782635296s" podCreationTimestamp="2024-12-13 14:24:35 +0000 UTC" firstStartedPulling="2024-12-13 14:24:36.030019263 +0000 UTC m=+5.884527949" lastFinishedPulling="2024-12-13 14:24:49.934846306 +0000 UTC m=+19.789354992" observedRunningTime="2024-12-13 14:24:55.760484975 +0000 UTC m=+25.614993691" watchObservedRunningTime="2024-12-13 14:24:59.782635296 +0000 UTC m=+29.637144012"
Dec 13 14:25:01.059600 kubelet[1935]: I1213 14:25:01.059550    1935 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 14:25:01.060220 kubelet[1935]: E1213 14:25:01.059988    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:01.166843 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:59658.service.
Dec 13 14:25:01.211498 sshd[3162]: Accepted publickey for core from 10.0.0.1 port 59658 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:01.212786 sshd[3162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:01.216323 systemd-logind[1187]: New session 7 of user core.
Dec 13 14:25:01.217400 systemd[1]: Started session-7.scope.
Dec 13 14:25:01.369399 sshd[3162]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:01.371524 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:59658.service: Deactivated successfully.
Dec 13 14:25:01.372329 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:25:01.373274 systemd-logind[1187]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:25:01.374178 systemd-logind[1187]: Removed session 7.
Dec 13 14:25:01.757446 kubelet[1935]: E1213 14:25:01.757272    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:02.248294 env[1205]: time="2024-12-13T14:25:02.248048583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:02.248294 env[1205]: time="2024-12-13T14:25:02.248111511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:02.248294 env[1205]: time="2024-12-13T14:25:02.248124836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:02.248993 env[1205]: time="2024-12-13T14:25:02.248905552Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8df19c7564c4d44673a1d7087949f38f978756391bcf89aefd551a751a1a6830 pid=3194 runtime=io.containerd.runc.v2
Dec 13 14:25:02.250678 env[1205]: time="2024-12-13T14:25:02.250605925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:02.250774 env[1205]: time="2024-12-13T14:25:02.250653644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:02.250774 env[1205]: time="2024-12-13T14:25:02.250667480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:02.250922 env[1205]: time="2024-12-13T14:25:02.250831738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ad9928e49551ab6e36d935617c9f97083a4dc8c1ec271372385c98fdd75828e pid=3203 runtime=io.containerd.runc.v2
Dec 13 14:25:02.269044 systemd[1]: Started cri-containerd-8df19c7564c4d44673a1d7087949f38f978756391bcf89aefd551a751a1a6830.scope.
Dec 13 14:25:02.273788 systemd[1]: Started cri-containerd-2ad9928e49551ab6e36d935617c9f97083a4dc8c1ec271372385c98fdd75828e.scope.
Dec 13 14:25:02.285608 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:25:02.288817 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:25:02.312963 env[1205]: time="2024-12-13T14:25:02.312907664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6qmrc,Uid:9ecad733-4c89-4bc6-a858-51e46b76ae4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8df19c7564c4d44673a1d7087949f38f978756391bcf89aefd551a751a1a6830\""
Dec 13 14:25:02.313859 kubelet[1935]: E1213 14:25:02.313819    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:02.317811 env[1205]: time="2024-12-13T14:25:02.317767428Z" level=info msg="CreateContainer within sandbox \"8df19c7564c4d44673a1d7087949f38f978756391bcf89aefd551a751a1a6830\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:25:02.320811 env[1205]: time="2024-12-13T14:25:02.320767583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7454,Uid:801f1ea3-ba1e-49c7-9354-7208b4c702bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad9928e49551ab6e36d935617c9f97083a4dc8c1ec271372385c98fdd75828e\""
Dec 13 14:25:02.322311 kubelet[1935]: E1213 14:25:02.322274    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:02.324156 env[1205]: time="2024-12-13T14:25:02.324126149Z" level=info msg="CreateContainer within sandbox \"2ad9928e49551ab6e36d935617c9f97083a4dc8c1ec271372385c98fdd75828e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:25:02.348802 env[1205]: time="2024-12-13T14:25:02.348730390Z" level=info msg="CreateContainer within sandbox \"8df19c7564c4d44673a1d7087949f38f978756391bcf89aefd551a751a1a6830\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60f3a49afde2f3e30efc389a336cdfabec22dd91551e6e3b7896b3c8631d2e9c\""
Dec 13 14:25:02.349486 env[1205]: time="2024-12-13T14:25:02.349445122Z" level=info msg="StartContainer for \"60f3a49afde2f3e30efc389a336cdfabec22dd91551e6e3b7896b3c8631d2e9c\""
Dec 13 14:25:02.355856 env[1205]: time="2024-12-13T14:25:02.355809422Z" level=info msg="CreateContainer within sandbox \"2ad9928e49551ab6e36d935617c9f97083a4dc8c1ec271372385c98fdd75828e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cadd70ded872f7c4bc3108248ce9cfae5dc27e17ab7f87e1062458ef064f7b89\""
Dec 13 14:25:02.357352 env[1205]: time="2024-12-13T14:25:02.357314239Z" level=info msg="StartContainer for \"cadd70ded872f7c4bc3108248ce9cfae5dc27e17ab7f87e1062458ef064f7b89\""
Dec 13 14:25:02.363672 systemd[1]: Started cri-containerd-60f3a49afde2f3e30efc389a336cdfabec22dd91551e6e3b7896b3c8631d2e9c.scope.
Dec 13 14:25:02.382047 systemd[1]: Started cri-containerd-cadd70ded872f7c4bc3108248ce9cfae5dc27e17ab7f87e1062458ef064f7b89.scope.
Dec 13 14:25:02.461480 env[1205]: time="2024-12-13T14:25:02.461366034Z" level=info msg="StartContainer for \"cadd70ded872f7c4bc3108248ce9cfae5dc27e17ab7f87e1062458ef064f7b89\" returns successfully"
Dec 13 14:25:02.482083 env[1205]: time="2024-12-13T14:25:02.482001263Z" level=info msg="StartContainer for \"60f3a49afde2f3e30efc389a336cdfabec22dd91551e6e3b7896b3c8631d2e9c\" returns successfully"
Dec 13 14:25:02.759660 kubelet[1935]: E1213 14:25:02.759620    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:02.761289 kubelet[1935]: E1213 14:25:02.761253    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:02.883473 kubelet[1935]: I1213 14:25:02.883396    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b7454" podStartSLOduration=27.883366286 podStartE2EDuration="27.883366286s" podCreationTimestamp="2024-12-13 14:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:02.882827965 +0000 UTC m=+32.737336651" watchObservedRunningTime="2024-12-13 14:25:02.883366286 +0000 UTC m=+32.737874992"
Dec 13 14:25:02.937764 kubelet[1935]: I1213 14:25:02.937693    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6qmrc" podStartSLOduration=27.937670075 podStartE2EDuration="27.937670075s" podCreationTimestamp="2024-12-13 14:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:02.936661883 +0000 UTC m=+32.791170589" watchObservedRunningTime="2024-12-13 14:25:02.937670075 +0000 UTC m=+32.792178761"
Dec 13 14:25:03.254343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213730570.mount: Deactivated successfully.
Dec 13 14:25:03.762886 kubelet[1935]: E1213 14:25:03.762829    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:03.763779 kubelet[1935]: E1213 14:25:03.763732    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:04.764853 kubelet[1935]: E1213 14:25:04.764822    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:06.374168 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:59672.service.
Dec 13 14:25:06.419968 sshd[3355]: Accepted publickey for core from 10.0.0.1 port 59672 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:06.421369 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:06.425136 systemd-logind[1187]: New session 8 of user core.
Dec 13 14:25:06.425868 systemd[1]: Started session-8.scope.
Dec 13 14:25:06.542225 sshd[3355]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:06.545328 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:59672.service: Deactivated successfully.
Dec 13 14:25:06.546294 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:25:06.547208 systemd-logind[1187]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:25:06.548008 systemd-logind[1187]: Removed session 8.
Dec 13 14:25:11.547252 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:38964.service.
Dec 13 14:25:11.588819 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 38964 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:11.590122 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:11.594444 systemd-logind[1187]: New session 9 of user core.
Dec 13 14:25:11.595552 systemd[1]: Started session-9.scope.
Dec 13 14:25:11.713301 sshd[3369]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:11.716051 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:38964.service: Deactivated successfully.
Dec 13 14:25:11.717087 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:25:11.717632 systemd-logind[1187]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:25:11.718418 systemd-logind[1187]: Removed session 9.
Dec 13 14:25:16.717349 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:38966.service.
Dec 13 14:25:16.762374 sshd[3383]: Accepted publickey for core from 10.0.0.1 port 38966 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:16.763677 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:16.768404 systemd-logind[1187]: New session 10 of user core.
Dec 13 14:25:16.769366 systemd[1]: Started session-10.scope.
Dec 13 14:25:16.886329 sshd[3383]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:16.889409 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:38966.service: Deactivated successfully.
Dec 13 14:25:16.890324 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:25:16.891371 systemd-logind[1187]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:25:16.892462 systemd-logind[1187]: Removed session 10.
Dec 13 14:25:21.891260 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:46768.service.
Dec 13 14:25:21.933632 sshd[3398]: Accepted publickey for core from 10.0.0.1 port 46768 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:21.935073 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:21.938757 systemd-logind[1187]: New session 11 of user core.
Dec 13 14:25:21.939535 systemd[1]: Started session-11.scope.
Dec 13 14:25:22.045488 sshd[3398]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:22.048647 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:46768.service: Deactivated successfully.
Dec 13 14:25:22.049346 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:25:22.049898 systemd-logind[1187]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:25:22.050903 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:46778.service.
Dec 13 14:25:22.051749 systemd-logind[1187]: Removed session 11.
Dec 13 14:25:22.093668 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 46778 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:22.094962 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:22.098668 systemd-logind[1187]: New session 12 of user core.
Dec 13 14:25:22.099383 systemd[1]: Started session-12.scope.
Dec 13 14:25:22.261493 sshd[3412]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:22.267022 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:46790.service.
Dec 13 14:25:22.267671 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:46778.service: Deactivated successfully.
Dec 13 14:25:22.269739 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:25:22.271417 systemd-logind[1187]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:25:22.274251 systemd-logind[1187]: Removed session 12.
Dec 13 14:25:22.319110 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 46790 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:22.320506 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:22.324557 systemd-logind[1187]: New session 13 of user core.
Dec 13 14:25:22.325829 systemd[1]: Started session-13.scope.
Dec 13 14:25:22.447878 sshd[3422]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:22.450140 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:46790.service: Deactivated successfully.
Dec 13 14:25:22.450959 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:25:22.451609 systemd-logind[1187]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:25:22.452422 systemd-logind[1187]: Removed session 13.
Dec 13 14:25:27.452198 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:46798.service.
Dec 13 14:25:27.492596 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 46798 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:27.493836 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:27.498304 systemd-logind[1187]: New session 14 of user core.
Dec 13 14:25:27.499341 systemd[1]: Started session-14.scope.
Dec 13 14:25:27.606214 sshd[3437]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:27.608922 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:46798.service: Deactivated successfully.
Dec 13 14:25:27.609608 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:25:27.610165 systemd-logind[1187]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:25:27.610883 systemd-logind[1187]: Removed session 14.
Dec 13 14:25:32.612524 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:36602.service.
Dec 13 14:25:32.655688 sshd[3454]: Accepted publickey for core from 10.0.0.1 port 36602 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:32.657710 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:32.662037 systemd-logind[1187]: New session 15 of user core.
Dec 13 14:25:32.663197 systemd[1]: Started session-15.scope.
Dec 13 14:25:32.798125 sshd[3454]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:32.800759 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:36602.service: Deactivated successfully.
Dec 13 14:25:32.801765 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:25:32.802939 systemd-logind[1187]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:25:32.803715 systemd-logind[1187]: Removed session 15.
Dec 13 14:25:37.801993 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:36614.service.
Dec 13 14:25:37.906150 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 36614 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:37.907225 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:37.910581 systemd-logind[1187]: New session 16 of user core.
Dec 13 14:25:37.911360 systemd[1]: Started session-16.scope.
Dec 13 14:25:38.038721 sshd[3469]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:38.042118 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:36614.service: Deactivated successfully.
Dec 13 14:25:38.042648 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:25:38.043153 systemd-logind[1187]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:25:38.044213 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:39386.service.
Dec 13 14:25:38.044919 systemd-logind[1187]: Removed session 16.
Dec 13 14:25:38.085511 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 39386 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:38.086780 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:38.090388 systemd-logind[1187]: New session 17 of user core.
Dec 13 14:25:38.091368 systemd[1]: Started session-17.scope.
Dec 13 14:25:38.725800 sshd[3482]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:38.728951 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:39392.service.
Dec 13 14:25:38.729434 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:39386.service: Deactivated successfully.
Dec 13 14:25:38.730018 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:25:38.730651 systemd-logind[1187]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:25:38.731556 systemd-logind[1187]: Removed session 17.
Dec 13 14:25:38.774944 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 39392 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:38.776504 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:38.780412 systemd-logind[1187]: New session 18 of user core.
Dec 13 14:25:38.781190 systemd[1]: Started session-18.scope.
Dec 13 14:25:40.173138 sshd[3492]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:40.175004 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:39398.service.
Dec 13 14:25:40.178424 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:39392.service: Deactivated successfully.
Dec 13 14:25:40.179237 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:25:40.179791 systemd-logind[1187]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:25:40.183115 systemd-logind[1187]: Removed session 18.
Dec 13 14:25:40.219290 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 39398 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:40.220788 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:40.224676 systemd-logind[1187]: New session 19 of user core.
Dec 13 14:25:40.225802 systemd[1]: Started session-19.scope.
Dec 13 14:25:40.489285 sshd[3510]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:40.492855 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:39402.service.
Dec 13 14:25:40.495117 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:39398.service: Deactivated successfully.
Dec 13 14:25:40.495661 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:25:40.496659 systemd-logind[1187]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:25:40.497643 systemd-logind[1187]: Removed session 19.
Dec 13 14:25:40.534563 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 39402 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:40.535949 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:40.539697 systemd-logind[1187]: New session 20 of user core.
Dec 13 14:25:40.540488 systemd[1]: Started session-20.scope.
Dec 13 14:25:40.705815 sshd[3522]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:40.707963 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:39402.service: Deactivated successfully.
Dec 13 14:25:40.708678 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:25:40.709268 systemd-logind[1187]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:25:40.709947 systemd-logind[1187]: Removed session 20.
Dec 13 14:25:45.712007 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:39406.service.
Dec 13 14:25:45.756155 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:45.757837 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:45.762387 systemd-logind[1187]: New session 21 of user core.
Dec 13 14:25:45.763257 systemd[1]: Started session-21.scope.
Dec 13 14:25:45.870519 sshd[3537]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:45.872535 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:39406.service: Deactivated successfully.
Dec 13 14:25:45.873189 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:25:45.873862 systemd-logind[1187]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:25:45.874491 systemd-logind[1187]: Removed session 21.
Dec 13 14:25:50.876244 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:43844.service.
Dec 13 14:25:50.919752 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 43844 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:50.921125 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:50.925050 systemd-logind[1187]: New session 22 of user core.
Dec 13 14:25:50.926048 systemd[1]: Started session-22.scope.
Dec 13 14:25:51.030234 sshd[3553]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:51.032790 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:43844.service: Deactivated successfully.
Dec 13 14:25:51.033550 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:25:51.034290 systemd-logind[1187]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:25:51.035155 systemd-logind[1187]: Removed session 22.
Dec 13 14:25:55.243963 kubelet[1935]: E1213 14:25:55.243901    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:56.035296 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:43856.service.
Dec 13 14:25:56.076478 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 43856 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:25:56.078052 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:56.082401 systemd-logind[1187]: New session 23 of user core.
Dec 13 14:25:56.083284 systemd[1]: Started session-23.scope.
Dec 13 14:25:56.187446 sshd[3568]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:56.189882 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:43856.service: Deactivated successfully.
Dec 13 14:25:56.190585 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:25:56.191135 systemd-logind[1187]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:25:56.191745 systemd-logind[1187]: Removed session 23.
Dec 13 14:25:59.243662 kubelet[1935]: E1213 14:25:59.243612    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:01.191990 systemd[1]: Started sshd@23-10.0.0.77:22-10.0.0.1:49042.service.
Dec 13 14:26:01.234444 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 49042 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:26:01.235883 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:01.239573 systemd-logind[1187]: New session 24 of user core.
Dec 13 14:26:01.240507 systemd[1]: Started session-24.scope.
Dec 13 14:26:01.345178 sshd[3581]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:01.348408 systemd[1]: sshd@23-10.0.0.77:22-10.0.0.1:49042.service: Deactivated successfully.
Dec 13 14:26:01.348974 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:26:01.350119 systemd-logind[1187]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:26:01.351363 systemd[1]: Started sshd@24-10.0.0.77:22-10.0.0.1:49058.service.
Dec 13 14:26:01.352175 systemd-logind[1187]: Removed session 24.
Dec 13 14:26:01.393960 sshd[3594]: Accepted publickey for core from 10.0.0.1 port 49058 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:26:01.395190 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:01.398789 systemd-logind[1187]: New session 25 of user core.
Dec 13 14:26:01.399721 systemd[1]: Started session-25.scope.
Dec 13 14:26:03.335650 env[1205]: time="2024-12-13T14:26:03.335567989Z" level=info msg="StopContainer for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" with timeout 30 (s)"
Dec 13 14:26:03.336392 env[1205]: time="2024-12-13T14:26:03.336050643Z" level=info msg="Stop container \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" with signal terminated"
Dec 13 14:26:03.340194 env[1205]: time="2024-12-13T14:26:03.340128532Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:26:03.345540 env[1205]: time="2024-12-13T14:26:03.345493246Z" level=info msg="StopContainer for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" with timeout 2 (s)"
Dec 13 14:26:03.345813 env[1205]: time="2024-12-13T14:26:03.345755323Z" level=info msg="Stop container \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" with signal terminated"
Dec 13 14:26:03.350269 systemd[1]: cri-containerd-cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260.scope: Deactivated successfully.
Dec 13 14:26:03.353739 systemd-networkd[1032]: lxc_health: Link DOWN
Dec 13 14:26:03.353751 systemd-networkd[1032]: lxc_health: Lost carrier
Dec 13 14:26:03.370064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260-rootfs.mount: Deactivated successfully.
Dec 13 14:26:03.385434 systemd[1]: cri-containerd-12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581.scope: Deactivated successfully.
Dec 13 14:26:03.385720 systemd[1]: cri-containerd-12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581.scope: Consumed 6.917s CPU time.
Dec 13 14:26:03.404390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581-rootfs.mount: Deactivated successfully.
Dec 13 14:26:03.508403 env[1205]: time="2024-12-13T14:26:03.508343937Z" level=info msg="shim disconnected" id=cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260
Dec 13 14:26:03.508403 env[1205]: time="2024-12-13T14:26:03.508339858Z" level=info msg="shim disconnected" id=12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581
Dec 13 14:26:03.508403 env[1205]: time="2024-12-13T14:26:03.508395574Z" level=warning msg="cleaning up after shim disconnected" id=cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260 namespace=k8s.io
Dec 13 14:26:03.508403 env[1205]: time="2024-12-13T14:26:03.508405253Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:03.508403 env[1205]: time="2024-12-13T14:26:03.508407978Z" level=warning msg="cleaning up after shim disconnected" id=12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581 namespace=k8s.io
Dec 13 14:26:03.508403 env[1205]: time="2024-12-13T14:26:03.508419129Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:03.515531 env[1205]: time="2024-12-13T14:26:03.515469213Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3661 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:03.517997 env[1205]: time="2024-12-13T14:26:03.517941724Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3662 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:03.581178 env[1205]: time="2024-12-13T14:26:03.581112025Z" level=info msg="StopContainer for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" returns successfully"
Dec 13 14:26:03.581381 env[1205]: time="2024-12-13T14:26:03.581110492Z" level=info msg="StopContainer for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" returns successfully"
Dec 13 14:26:03.581805 env[1205]: time="2024-12-13T14:26:03.581760632Z" level=info msg="StopPodSandbox for \"de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766\""
Dec 13 14:26:03.581805 env[1205]: time="2024-12-13T14:26:03.581798675Z" level=info msg="StopPodSandbox for \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\""
Dec 13 14:26:03.581926 env[1205]: time="2024-12-13T14:26:03.581851354Z" level=info msg="Container to stop \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:03.581926 env[1205]: time="2024-12-13T14:26:03.581855191Z" level=info msg="Container to stop \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:03.581926 env[1205]: time="2024-12-13T14:26:03.581877805Z" level=info msg="Container to stop \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:03.581926 env[1205]: time="2024-12-13T14:26:03.581892573Z" level=info msg="Container to stop \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:03.581926 env[1205]: time="2024-12-13T14:26:03.581906679Z" level=info msg="Container to stop \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:03.581926 env[1205]: time="2024-12-13T14:26:03.581923060Z" level=info msg="Container to stop \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:03.584338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766-shm.mount: Deactivated successfully.
Dec 13 14:26:03.586414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92-shm.mount: Deactivated successfully.
Dec 13 14:26:03.587775 systemd[1]: cri-containerd-de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766.scope: Deactivated successfully.
Dec 13 14:26:03.599611 systemd[1]: cri-containerd-ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92.scope: Deactivated successfully.
Dec 13 14:26:03.714531 env[1205]: time="2024-12-13T14:26:03.714430614Z" level=info msg="shim disconnected" id=de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766
Dec 13 14:26:03.714531 env[1205]: time="2024-12-13T14:26:03.714478545Z" level=warning msg="cleaning up after shim disconnected" id=de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766 namespace=k8s.io
Dec 13 14:26:03.714531 env[1205]: time="2024-12-13T14:26:03.714486851Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:03.715253 env[1205]: time="2024-12-13T14:26:03.715187858Z" level=info msg="shim disconnected" id=ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92
Dec 13 14:26:03.715312 env[1205]: time="2024-12-13T14:26:03.715257931Z" level=warning msg="cleaning up after shim disconnected" id=ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92 namespace=k8s.io
Dec 13 14:26:03.715312 env[1205]: time="2024-12-13T14:26:03.715273811Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:03.722249 env[1205]: time="2024-12-13T14:26:03.722189952Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3722 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:03.722513 env[1205]: time="2024-12-13T14:26:03.722463840Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3723 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:03.728940 env[1205]: time="2024-12-13T14:26:03.728890014Z" level=info msg="TearDown network for sandbox \"de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766\" successfully"
Dec 13 14:26:03.728940 env[1205]: time="2024-12-13T14:26:03.728936352Z" level=info msg="StopPodSandbox for \"de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766\" returns successfully"
Dec 13 14:26:03.729024 env[1205]: time="2024-12-13T14:26:03.728929619Z" level=info msg="TearDown network for sandbox \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" successfully"
Dec 13 14:26:03.729024 env[1205]: time="2024-12-13T14:26:03.728963113Z" level=info msg="StopPodSandbox for \"ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92\" returns successfully"
Dec 13 14:26:03.882849 kubelet[1935]: I1213 14:26:03.882726    1935 scope.go:117] "RemoveContainer" containerID="12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581"
Dec 13 14:26:03.884579 env[1205]: time="2024-12-13T14:26:03.884546086Z" level=info msg="RemoveContainer for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\""
Dec 13 14:26:03.894274 kubelet[1935]: I1213 14:26:03.894220    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a00218d7-4562-41ff-a855-272aed7c022c-cilium-config-path\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894274 kubelet[1935]: I1213 14:26:03.894266    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a00218d7-4562-41ff-a855-272aed7c022c-clustermesh-secrets\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894425 kubelet[1935]: I1213 14:26:03.894292    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-xtables-lock\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894425 kubelet[1935]: I1213 14:26:03.894314    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-bpf-maps\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894425 kubelet[1935]: I1213 14:26:03.894336    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-run\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894425 kubelet[1935]: I1213 14:26:03.894357    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-cgroup\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894425 kubelet[1935]: I1213 14:26:03.894383    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tnf5\" (UniqueName: \"kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-kube-api-access-8tnf5\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894425 kubelet[1935]: I1213 14:26:03.894406    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-hubble-tls\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894633 kubelet[1935]: I1213 14:26:03.894424    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-lib-modules\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894633 kubelet[1935]: I1213 14:26:03.894444    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-kernel\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894633 kubelet[1935]: I1213 14:26:03.894464    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-hostproc\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894633 kubelet[1935]: I1213 14:26:03.894486    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-cilium-config-path\") pod \"aa8cb5b9-f1a7-44a6-a42a-8976761947ad\" (UID: \"aa8cb5b9-f1a7-44a6-a42a-8976761947ad\") "
Dec 13 14:26:03.894633 kubelet[1935]: I1213 14:26:03.894503    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-etc-cni-netd\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894633 kubelet[1935]: I1213 14:26:03.894523    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cni-path\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.894850 kubelet[1935]: I1213 14:26:03.894551    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmplf\" (UniqueName: \"kubernetes.io/projected/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-kube-api-access-fmplf\") pod \"aa8cb5b9-f1a7-44a6-a42a-8976761947ad\" (UID: \"aa8cb5b9-f1a7-44a6-a42a-8976761947ad\") "
Dec 13 14:26:03.894850 kubelet[1935]: I1213 14:26:03.894571    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-net\") pod \"a00218d7-4562-41ff-a855-272aed7c022c\" (UID: \"a00218d7-4562-41ff-a855-272aed7c022c\") "
Dec 13 14:26:03.920665 kubelet[1935]: I1213 14:26:03.918134    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.920665 kubelet[1935]: I1213 14:26:03.918156    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.920665 kubelet[1935]: I1213 14:26:03.918209    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00218d7-4562-41ff-a855-272aed7c022c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:26:03.920665 kubelet[1935]: I1213 14:26:03.918223    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.920665 kubelet[1935]: I1213 14:26:03.918167    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00218d7-4562-41ff-a855-272aed7c022c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:26:03.921038 kubelet[1935]: I1213 14:26:03.918264    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921038 kubelet[1935]: I1213 14:26:03.918296    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921038 kubelet[1935]: I1213 14:26:03.918204    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921038 kubelet[1935]: I1213 14:26:03.918320    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921038 kubelet[1935]: I1213 14:26:03.920608    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa8cb5b9-f1a7-44a6-a42a-8976761947ad" (UID: "aa8cb5b9-f1a7-44a6-a42a-8976761947ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:26:03.921216 kubelet[1935]: I1213 14:26:03.920661    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921216 kubelet[1935]: I1213 14:26:03.920688    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921216 kubelet[1935]: I1213 14:26:03.920707    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:03.921216 kubelet[1935]: I1213 14:26:03.921152    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-kube-api-access-8tnf5" (OuterVolumeSpecName: "kube-api-access-8tnf5") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "kube-api-access-8tnf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:03.921486 kubelet[1935]: I1213 14:26:03.921447    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-kube-api-access-fmplf" (OuterVolumeSpecName: "kube-api-access-fmplf") pod "aa8cb5b9-f1a7-44a6-a42a-8976761947ad" (UID: "aa8cb5b9-f1a7-44a6-a42a-8976761947ad"). InnerVolumeSpecName "kube-api-access-fmplf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:03.921853 kubelet[1935]: I1213 14:26:03.921800    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a00218d7-4562-41ff-a855-272aed7c022c" (UID: "a00218d7-4562-41ff-a855-272aed7c022c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:03.994788 kubelet[1935]: I1213 14:26:03.994711    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a00218d7-4562-41ff-a855-272aed7c022c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.994788 kubelet[1935]: I1213 14:26:03.994782    1935 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a00218d7-4562-41ff-a855-272aed7c022c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.994788 kubelet[1935]: I1213 14:26:03.994793    1935 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.994788 kubelet[1935]: I1213 14:26:03.994801    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.994788 kubelet[1935]: I1213 14:26:03.994812    1935 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.994788 kubelet[1935]: I1213 14:26:03.994819    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994827    1935 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994834    1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8tnf5\" (UniqueName: \"kubernetes.io/projected/a00218d7-4562-41ff-a855-272aed7c022c-kube-api-access-8tnf5\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994841    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994847    1935 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994855    1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994862    1935 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994868    1935 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995170 kubelet[1935]: I1213 14:26:03.994874    1935 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995354 kubelet[1935]: I1213 14:26:03.994880    1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fmplf\" (UniqueName: \"kubernetes.io/projected/aa8cb5b9-f1a7-44a6-a42a-8976761947ad-kube-api-access-fmplf\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:03.995354 kubelet[1935]: I1213 14:26:03.994887    1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a00218d7-4562-41ff-a855-272aed7c022c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:04.014474 env[1205]: time="2024-12-13T14:26:04.014393742Z" level=info msg="RemoveContainer for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" returns successfully"
Dec 13 14:26:04.014930 kubelet[1935]: I1213 14:26:04.014881    1935 scope.go:117] "RemoveContainer" containerID="4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21"
Dec 13 14:26:04.016232 env[1205]: time="2024-12-13T14:26:04.016205762Z" level=info msg="RemoveContainer for \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\""
Dec 13 14:26:04.072372 env[1205]: time="2024-12-13T14:26:04.072292785Z" level=info msg="RemoveContainer for \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\" returns successfully"
Dec 13 14:26:04.072682 kubelet[1935]: I1213 14:26:04.072641    1935 scope.go:117] "RemoveContainer" containerID="f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052"
Dec 13 14:26:04.074042 env[1205]: time="2024-12-13T14:26:04.074000997Z" level=info msg="RemoveContainer for \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\""
Dec 13 14:26:04.118433 env[1205]: time="2024-12-13T14:26:04.118360426Z" level=info msg="RemoveContainer for \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\" returns successfully"
Dec 13 14:26:04.118685 kubelet[1935]: I1213 14:26:04.118658    1935 scope.go:117] "RemoveContainer" containerID="96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded"
Dec 13 14:26:04.119745 env[1205]: time="2024-12-13T14:26:04.119719659Z" level=info msg="RemoveContainer for \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\""
Dec 13 14:26:04.139916 env[1205]: time="2024-12-13T14:26:04.139673300Z" level=info msg="RemoveContainer for \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\" returns successfully"
Dec 13 14:26:04.140238 kubelet[1935]: I1213 14:26:04.140047    1935 scope.go:117] "RemoveContainer" containerID="c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3"
Dec 13 14:26:04.141954 env[1205]: time="2024-12-13T14:26:04.141912698Z" level=info msg="RemoveContainer for \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\""
Dec 13 14:26:04.155283 env[1205]: time="2024-12-13T14:26:04.155204783Z" level=info msg="RemoveContainer for \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\" returns successfully"
Dec 13 14:26:04.155573 kubelet[1935]: I1213 14:26:04.155529    1935 scope.go:117] "RemoveContainer" containerID="12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581"
Dec 13 14:26:04.156074 env[1205]: time="2024-12-13T14:26:04.155938992Z" level=error msg="ContainerStatus for \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\": not found"
Dec 13 14:26:04.156225 kubelet[1935]: E1213 14:26:04.156202    1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\": not found" containerID="12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581"
Dec 13 14:26:04.156334 kubelet[1935]: I1213 14:26:04.156236    1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581"} err="failed to get container status \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\": rpc error: code = NotFound desc = an error occurred when try to find container \"12d5588e6b8ba466a9ef86c49413a7d361a97138c7b32671dc7e550d25d32581\": not found"
Dec 13 14:26:04.156369 kubelet[1935]: I1213 14:26:04.156334    1935 scope.go:117] "RemoveContainer" containerID="4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21"
Dec 13 14:26:04.156558 env[1205]: time="2024-12-13T14:26:04.156509562Z" level=error msg="ContainerStatus for \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\": not found"
Dec 13 14:26:04.156724 kubelet[1935]: E1213 14:26:04.156698    1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\": not found" containerID="4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21"
Dec 13 14:26:04.156823 kubelet[1935]: I1213 14:26:04.156724    1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21"} err="failed to get container status \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c7509b9b9b1525fdb1f6df46eb97fd48db95166b82ebaf794d0aeb306544e21\": not found"
Dec 13 14:26:04.156823 kubelet[1935]: I1213 14:26:04.156740    1935 scope.go:117] "RemoveContainer" containerID="f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052"
Dec 13 14:26:04.157081 env[1205]: time="2024-12-13T14:26:04.156971035Z" level=error msg="ContainerStatus for \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\": not found"
Dec 13 14:26:04.157170 kubelet[1935]: E1213 14:26:04.157145    1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\": not found" containerID="f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052"
Dec 13 14:26:04.157253 kubelet[1935]: I1213 14:26:04.157169    1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052"} err="failed to get container status \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7fc8f8e1b6fd0ef29cf558d6201d0b339fecf69a91566698f9130111a7be052\": not found"
Dec 13 14:26:04.157253 kubelet[1935]: I1213 14:26:04.157183    1935 scope.go:117] "RemoveContainer" containerID="96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded"
Dec 13 14:26:04.157591 env[1205]: time="2024-12-13T14:26:04.157486521Z" level=error msg="ContainerStatus for \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\": not found"
Dec 13 14:26:04.158087 kubelet[1935]: E1213 14:26:04.158016    1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\": not found" containerID="96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded"
Dec 13 14:26:04.158280 kubelet[1935]: I1213 14:26:04.158121    1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded"} err="failed to get container status \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\": rpc error: code = NotFound desc = an error occurred when try to find container \"96bd3f8a7cb9e92651d21821eb530c5d2bb6b93f5d52974305ccf24b90fefded\": not found"
Dec 13 14:26:04.158280 kubelet[1935]: I1213 14:26:04.158162    1935 scope.go:117] "RemoveContainer" containerID="c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3"
Dec 13 14:26:04.158883 env[1205]: time="2024-12-13T14:26:04.158787643Z" level=error msg="ContainerStatus for \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\": not found"
Dec 13 14:26:04.159021 kubelet[1935]: E1213 14:26:04.158998    1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\": not found" containerID="c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3"
Dec 13 14:26:04.159103 kubelet[1935]: I1213 14:26:04.159021    1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3"} err="failed to get container status \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4ef7c348950014f77a0b9a077636d8ee17b06b2eb99357cc97ceb5daa5180e3\": not found"
Dec 13 14:26:04.159103 kubelet[1935]: I1213 14:26:04.159038    1935 scope.go:117] "RemoveContainer" containerID="cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260"
Dec 13 14:26:04.160126 env[1205]: time="2024-12-13T14:26:04.160093304Z" level=info msg="RemoveContainer for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\""
Dec 13 14:26:04.174601 env[1205]: time="2024-12-13T14:26:04.174543059Z" level=info msg="RemoveContainer for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" returns successfully"
Dec 13 14:26:04.174939 kubelet[1935]: I1213 14:26:04.174900    1935 scope.go:117] "RemoveContainer" containerID="cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260"
Dec 13 14:26:04.175359 env[1205]: time="2024-12-13T14:26:04.175267361Z" level=error msg="ContainerStatus for \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\": not found"
Dec 13 14:26:04.175491 kubelet[1935]: E1213 14:26:04.175465    1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\": not found" containerID="cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260"
Dec 13 14:26:04.175546 kubelet[1935]: I1213 14:26:04.175497    1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260"} err="failed to get container status \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdae701781b74d8adf0b13a493b6b5ebe15d2cb18d157ec29c13a9f2b1d48260\": not found"
Dec 13 14:26:04.187836 systemd[1]: Removed slice kubepods-burstable-poda00218d7_4562_41ff_a855_272aed7c022c.slice.
Dec 13 14:26:04.187938 systemd[1]: kubepods-burstable-poda00218d7_4562_41ff_a855_272aed7c022c.slice: Consumed 7.012s CPU time.
Dec 13 14:26:04.189328 systemd[1]: Removed slice kubepods-besteffort-podaa8cb5b9_f1a7_44a6_a42a_8976761947ad.slice.
Dec 13 14:26:04.316787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebc25bdbdbabf540dabc80fc67cf66c9faeeaf3b1d367210ac28fc0d05e13d92-rootfs.mount: Deactivated successfully.
Dec 13 14:26:04.316914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de0893e378159ed42db14a90d38b92f34eb525a49c0bb80aa64471b01570a766-rootfs.mount: Deactivated successfully.
Dec 13 14:26:04.316980 systemd[1]: var-lib-kubelet-pods-a00218d7\x2d4562\x2d41ff\x2da855\x2d272aed7c022c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:26:04.317051 systemd[1]: var-lib-kubelet-pods-a00218d7\x2d4562\x2d41ff\x2da855\x2d272aed7c022c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:26:04.317149 systemd[1]: var-lib-kubelet-pods-a00218d7\x2d4562\x2d41ff\x2da855\x2d272aed7c022c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tnf5.mount: Deactivated successfully.
Dec 13 14:26:04.317225 systemd[1]: var-lib-kubelet-pods-aa8cb5b9\x2df1a7\x2d44a6\x2da42a\x2d8976761947ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmplf.mount: Deactivated successfully.
Dec 13 14:26:05.072099 sshd[3594]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:05.074967 systemd[1]: sshd@24-10.0.0.77:22-10.0.0.1:49058.service: Deactivated successfully.
Dec 13 14:26:05.075572 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:26:05.076133 systemd-logind[1187]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:26:05.077187 systemd[1]: Started sshd@25-10.0.0.77:22-10.0.0.1:49068.service.
Dec 13 14:26:05.078004 systemd-logind[1187]: Removed session 25.
Dec 13 14:26:05.118131 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 49068 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:26:05.119372 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:05.122707 systemd-logind[1187]: New session 26 of user core.
Dec 13 14:26:05.123535 systemd[1]: Started session-26.scope.
Dec 13 14:26:05.296600 kubelet[1935]: E1213 14:26:05.296542    1935 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:26:06.246370 kubelet[1935]: I1213 14:26:06.246315    1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00218d7-4562-41ff-a855-272aed7c022c" path="/var/lib/kubelet/pods/a00218d7-4562-41ff-a855-272aed7c022c/volumes"
Dec 13 14:26:06.246983 kubelet[1935]: I1213 14:26:06.246950    1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8cb5b9-f1a7-44a6-a42a-8976761947ad" path="/var/lib/kubelet/pods/aa8cb5b9-f1a7-44a6-a42a-8976761947ad/volumes"
Dec 13 14:26:06.294216 sshd[3756]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:06.298562 systemd[1]: Started sshd@26-10.0.0.77:22-10.0.0.1:49082.service.
Dec 13 14:26:06.299053 systemd[1]: sshd@25-10.0.0.77:22-10.0.0.1:49068.service: Deactivated successfully.
Dec 13 14:26:06.299700 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:26:06.300366 systemd-logind[1187]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:26:06.301213 systemd-logind[1187]: Removed session 26.
Dec 13 14:26:06.339320 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:26:06.340504 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:06.343854 systemd-logind[1187]: New session 27 of user core.
Dec 13 14:26:06.344651 systemd[1]: Started session-27.scope.
Dec 13 14:26:06.429334 kubelet[1935]: E1213 14:26:06.429288    1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a00218d7-4562-41ff-a855-272aed7c022c" containerName="cilium-agent"
Dec 13 14:26:06.429334 kubelet[1935]: E1213 14:26:06.429329    1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a00218d7-4562-41ff-a855-272aed7c022c" containerName="mount-cgroup"
Dec 13 14:26:06.429334 kubelet[1935]: E1213 14:26:06.429340    1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a00218d7-4562-41ff-a855-272aed7c022c" containerName="apply-sysctl-overwrites"
Dec 13 14:26:06.429334 kubelet[1935]: E1213 14:26:06.429347    1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a00218d7-4562-41ff-a855-272aed7c022c" containerName="mount-bpf-fs"
Dec 13 14:26:06.429910 kubelet[1935]: E1213 14:26:06.429354    1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a00218d7-4562-41ff-a855-272aed7c022c" containerName="clean-cilium-state"
Dec 13 14:26:06.429910 kubelet[1935]: E1213 14:26:06.429361    1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa8cb5b9-f1a7-44a6-a42a-8976761947ad" containerName="cilium-operator"
Dec 13 14:26:06.429910 kubelet[1935]: I1213 14:26:06.429386    1935 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8cb5b9-f1a7-44a6-a42a-8976761947ad" containerName="cilium-operator"
Dec 13 14:26:06.429910 kubelet[1935]: I1213 14:26:06.429392    1935 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00218d7-4562-41ff-a855-272aed7c022c" containerName="cilium-agent"
Dec 13 14:26:06.435802 systemd[1]: Created slice kubepods-burstable-pode868b592_5424_4edd_ac10_f805e9d87da3.slice.
Dec 13 14:26:06.499540 sshd[3767]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:06.504896 systemd[1]: Started sshd@27-10.0.0.77:22-10.0.0.1:49092.service.
Dec 13 14:26:06.507460 systemd[1]: sshd@26-10.0.0.77:22-10.0.0.1:49082.service: Deactivated successfully.
Dec 13 14:26:06.508699 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:26:06.509436 systemd-logind[1187]: Session 27 logged out. Waiting for processes to exit.
Dec 13 14:26:06.510662 systemd-logind[1187]: Removed session 27.
Dec 13 14:26:06.530114 kubelet[1935]: E1213 14:26:06.529997    1935 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-24t5m lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-2gg5c" podUID="e868b592-5424-4edd-ac10-f805e9d87da3"
Dec 13 14:26:06.548214 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 49092 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg
Dec 13 14:26:06.549470 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:06.554544 systemd[1]: Started session-28.scope.
Dec 13 14:26:06.555106 systemd-logind[1187]: New session 28 of user core.
Dec 13 14:26:06.611654 kubelet[1935]: I1213 14:26:06.611589    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-net\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611654 kubelet[1935]: I1213 14:26:06.611627    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-cgroup\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611654 kubelet[1935]: I1213 14:26:06.611646    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-xtables-lock\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611654 kubelet[1935]: I1213 14:26:06.611661    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-ipsec-secrets\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611654 kubelet[1935]: I1213 14:26:06.611674    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-hubble-tls\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611965 kubelet[1935]: I1213 14:26:06.611692    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24t5m\" (UniqueName: \"kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-kube-api-access-24t5m\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611965 kubelet[1935]: I1213 14:26:06.611779    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cni-path\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611965 kubelet[1935]: I1213 14:26:06.611822    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-config-path\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611965 kubelet[1935]: I1213 14:26:06.611848    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-bpf-maps\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611965 kubelet[1935]: I1213 14:26:06.611863    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-lib-modules\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.611965 kubelet[1935]: I1213 14:26:06.611878    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-etc-cni-netd\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.612118 kubelet[1935]: I1213 14:26:06.611896    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-kernel\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.612118 kubelet[1935]: I1213 14:26:06.611911    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-run\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.612118 kubelet[1935]: I1213 14:26:06.611925    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-hostproc\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:06.612118 kubelet[1935]: I1213 14:26:06.611939    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-clustermesh-secrets\") pod \"cilium-2gg5c\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") " pod="kube-system/cilium-2gg5c"
Dec 13 14:26:07.014019 kubelet[1935]: I1213 14:26:07.013924    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-etc-cni-netd\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014019 kubelet[1935]: I1213 14:26:07.013998    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-bpf-maps\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014019 kubelet[1935]: I1213 14:26:07.014021    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-hostproc\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014317 kubelet[1935]: I1213 14:26:07.014040    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-cgroup\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014317 kubelet[1935]: I1213 14:26:07.014091    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-kernel\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014317 kubelet[1935]: I1213 14:26:07.014113    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-run\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014317 kubelet[1935]: I1213 14:26:07.014132    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-xtables-lock\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014317 kubelet[1935]: I1213 14:26:07.014121    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-hostproc" (OuterVolumeSpecName: "hostproc") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014317 kubelet[1935]: I1213 14:26:07.014165    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-ipsec-secrets\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014457 kubelet[1935]: I1213 14:26:07.014185    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cni-path\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014457 kubelet[1935]: I1213 14:26:07.014193    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014457 kubelet[1935]: I1213 14:26:07.014210    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014457 kubelet[1935]: I1213 14:26:07.014213    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-config-path\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014457 kubelet[1935]: I1213 14:26:07.014204    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014589 kubelet[1935]: I1213 14:26:07.014224    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014589 kubelet[1935]: I1213 14:26:07.014234    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-net\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014589 kubelet[1935]: I1213 14:26:07.014257    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-clustermesh-secrets\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014589 kubelet[1935]: I1213 14:26:07.014279    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24t5m\" (UniqueName: \"kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-kube-api-access-24t5m\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014589 kubelet[1935]: I1213 14:26:07.014296    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-lib-modules\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014296    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014317    1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-hubble-tls\") pod \"e868b592-5424-4edd-ac10-f805e9d87da3\" (UID: \"e868b592-5424-4edd-ac10-f805e9d87da3\") "
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014357    1935 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014370    1935 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014371    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014380    1935 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.014745 kubelet[1935]: I1213 14:26:07.014412    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.014971 kubelet[1935]: I1213 14:26:07.014456    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.014971 kubelet[1935]: I1213 14:26:07.014480    1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.014971 kubelet[1935]: I1213 14:26:07.014543    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cni-path" (OuterVolumeSpecName: "cni-path") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.014971 kubelet[1935]: I1213 14:26:07.014578    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.015331 kubelet[1935]: I1213 14:26:07.015290    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:07.016631 kubelet[1935]: I1213 14:26:07.016605    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:26:07.018534 kubelet[1935]: I1213 14:26:07.018489    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-kube-api-access-24t5m" (OuterVolumeSpecName: "kube-api-access-24t5m") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "kube-api-access-24t5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:07.018780 kubelet[1935]: I1213 14:26:07.018705    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:07.018990 kubelet[1935]: I1213 14:26:07.018971    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:26:07.019218 systemd[1]: var-lib-kubelet-pods-e868b592\x2d5424\x2d4edd\x2dac10\x2df805e9d87da3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24t5m.mount: Deactivated successfully.
Dec 13 14:26:07.019326 systemd[1]: var-lib-kubelet-pods-e868b592\x2d5424\x2d4edd\x2dac10\x2df805e9d87da3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:26:07.020551 kubelet[1935]: I1213 14:26:07.020517    1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e868b592-5424-4edd-ac10-f805e9d87da3" (UID: "e868b592-5424-4edd-ac10-f805e9d87da3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114739    1935 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114784    1935 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114794    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114808    1935 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114815    1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e868b592-5424-4edd-ac10-f805e9d87da3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114822    1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.114809 kubelet[1935]: I1213 14:26:07.114829    1935 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e868b592-5424-4edd-ac10-f805e9d87da3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.115213 kubelet[1935]: I1213 14:26:07.114836    1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-24t5m\" (UniqueName: \"kubernetes.io/projected/e868b592-5424-4edd-ac10-f805e9d87da3-kube-api-access-24t5m\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.115213 kubelet[1935]: I1213 14:26:07.114844    1935 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e868b592-5424-4edd-ac10-f805e9d87da3-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 14:26:07.717410 systemd[1]: var-lib-kubelet-pods-e868b592\x2d5424\x2d4edd\x2dac10\x2df805e9d87da3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:26:07.717518 systemd[1]: var-lib-kubelet-pods-e868b592\x2d5424\x2d4edd\x2dac10\x2df805e9d87da3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:26:07.900103 systemd[1]: Removed slice kubepods-burstable-pode868b592_5424_4edd_ac10_f805e9d87da3.slice.
Dec 13 14:26:07.952773 systemd[1]: Created slice kubepods-burstable-podb9f1743e_a42f_43dc_917c_501a0703f3d2.slice.
Dec 13 14:26:08.120877 kubelet[1935]: I1213 14:26:08.120821    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-cni-path\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.120877 kubelet[1935]: I1213 14:26:08.120877    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9f1743e-a42f-43dc-917c-501a0703f3d2-clustermesh-secrets\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121408 kubelet[1935]: I1213 14:26:08.120906    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9f1743e-a42f-43dc-917c-501a0703f3d2-hubble-tls\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121408 kubelet[1935]: I1213 14:26:08.121036    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-xtables-lock\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121408 kubelet[1935]: I1213 14:26:08.121118    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-host-proc-sys-net\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121408 kubelet[1935]: I1213 14:26:08.121144    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-host-proc-sys-kernel\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121408 kubelet[1935]: I1213 14:26:08.121165    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-bpf-maps\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121408 kubelet[1935]: I1213 14:26:08.121186    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-etc-cni-netd\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121645 kubelet[1935]: I1213 14:26:08.121210    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9f1743e-a42f-43dc-917c-501a0703f3d2-cilium-ipsec-secrets\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121645 kubelet[1935]: I1213 14:26:08.121230    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfsr\" (UniqueName: \"kubernetes.io/projected/b9f1743e-a42f-43dc-917c-501a0703f3d2-kube-api-access-ltfsr\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121645 kubelet[1935]: I1213 14:26:08.121253    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-hostproc\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121645 kubelet[1935]: I1213 14:26:08.121272    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-cilium-cgroup\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121645 kubelet[1935]: I1213 14:26:08.121304    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9f1743e-a42f-43dc-917c-501a0703f3d2-cilium-config-path\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121645 kubelet[1935]: I1213 14:26:08.121326    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-cilium-run\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.121821 kubelet[1935]: I1213 14:26:08.121349    1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9f1743e-a42f-43dc-917c-501a0703f3d2-lib-modules\") pod \"cilium-2hffc\" (UID: \"b9f1743e-a42f-43dc-917c-501a0703f3d2\") " pod="kube-system/cilium-2hffc"
Dec 13 14:26:08.244226 kubelet[1935]: E1213 14:26:08.244157    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:08.246076 kubelet[1935]: I1213 14:26:08.246043    1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e868b592-5424-4edd-ac10-f805e9d87da3" path="/var/lib/kubelet/pods/e868b592-5424-4edd-ac10-f805e9d87da3/volumes"
Dec 13 14:26:08.255906 kubelet[1935]: E1213 14:26:08.255869    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:08.256420 env[1205]: time="2024-12-13T14:26:08.256370966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2hffc,Uid:b9f1743e-a42f-43dc-917c-501a0703f3d2,Namespace:kube-system,Attempt:0,}"
Dec 13 14:26:08.274864 env[1205]: time="2024-12-13T14:26:08.274774679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:26:08.274864 env[1205]: time="2024-12-13T14:26:08.274812241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:26:08.274864 env[1205]: time="2024-12-13T14:26:08.274823862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:26:08.275117 env[1205]: time="2024-12-13T14:26:08.275027127Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff pid=3814 runtime=io.containerd.runc.v2
Dec 13 14:26:08.288477 systemd[1]: Started cri-containerd-903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff.scope.
Dec 13 14:26:08.308510 env[1205]: time="2024-12-13T14:26:08.308035389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2hffc,Uid:b9f1743e-a42f-43dc-917c-501a0703f3d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\""
Dec 13 14:26:08.308665 kubelet[1935]: E1213 14:26:08.308558    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:08.312506 env[1205]: time="2024-12-13T14:26:08.312470246Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:26:08.329467 env[1205]: time="2024-12-13T14:26:08.329401144Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33\""
Dec 13 14:26:08.330228 env[1205]: time="2024-12-13T14:26:08.330147885Z" level=info msg="StartContainer for \"2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33\""
Dec 13 14:26:08.348140 systemd[1]: Started cri-containerd-2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33.scope.
Dec 13 14:26:08.376425 env[1205]: time="2024-12-13T14:26:08.375515220Z" level=info msg="StartContainer for \"2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33\" returns successfully"
Dec 13 14:26:08.384455 systemd[1]: cri-containerd-2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33.scope: Deactivated successfully.
Dec 13 14:26:08.415369 env[1205]: time="2024-12-13T14:26:08.415313062Z" level=info msg="shim disconnected" id=2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33
Dec 13 14:26:08.415557 env[1205]: time="2024-12-13T14:26:08.415375339Z" level=warning msg="cleaning up after shim disconnected" id=2dc7a6fa5e9331f57ca5b5ee514bc797e716cd20e8198f7225a4a91e8dbd9e33 namespace=k8s.io
Dec 13 14:26:08.415557 env[1205]: time="2024-12-13T14:26:08.415384647Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:08.422225 env[1205]: time="2024-12-13T14:26:08.422172573Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:08.898899 kubelet[1935]: E1213 14:26:08.898862    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:08.902451 env[1205]: time="2024-12-13T14:26:08.902376055Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:26:08.914380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527757589.mount: Deactivated successfully.
Dec 13 14:26:08.915051 env[1205]: time="2024-12-13T14:26:08.914976013Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7\""
Dec 13 14:26:08.916858 env[1205]: time="2024-12-13T14:26:08.916798971Z" level=info msg="StartContainer for \"dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7\""
Dec 13 14:26:08.933787 systemd[1]: Started cri-containerd-dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7.scope.
Dec 13 14:26:08.958343 env[1205]: time="2024-12-13T14:26:08.958267532Z" level=info msg="StartContainer for \"dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7\" returns successfully"
Dec 13 14:26:08.964135 systemd[1]: cri-containerd-dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7.scope: Deactivated successfully.
Dec 13 14:26:08.984191 env[1205]: time="2024-12-13T14:26:08.984128317Z" level=info msg="shim disconnected" id=dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7
Dec 13 14:26:08.984191 env[1205]: time="2024-12-13T14:26:08.984188772Z" level=warning msg="cleaning up after shim disconnected" id=dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7 namespace=k8s.io
Dec 13 14:26:08.984191 env[1205]: time="2024-12-13T14:26:08.984200374Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:08.991047 env[1205]: time="2024-12-13T14:26:08.990993440Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3958 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:09.717459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd224d697b40a88a46faa7e8557ea5345c3bc24402ba8388987691500b1190f7-rootfs.mount: Deactivated successfully.
Dec 13 14:26:09.902826 kubelet[1935]: E1213 14:26:09.902783    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:09.904371 env[1205]: time="2024-12-13T14:26:09.904326898Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:26:09.919901 env[1205]: time="2024-12-13T14:26:09.919837624Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f\""
Dec 13 14:26:09.920506 env[1205]: time="2024-12-13T14:26:09.920476172Z" level=info msg="StartContainer for \"fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f\""
Dec 13 14:26:09.937830 systemd[1]: Started cri-containerd-fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f.scope.
Dec 13 14:26:09.962401 env[1205]: time="2024-12-13T14:26:09.962349315Z" level=info msg="StartContainer for \"fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f\" returns successfully"
Dec 13 14:26:09.964590 systemd[1]: cri-containerd-fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f.scope: Deactivated successfully.
Dec 13 14:26:09.985949 env[1205]: time="2024-12-13T14:26:09.985790838Z" level=info msg="shim disconnected" id=fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f
Dec 13 14:26:09.985949 env[1205]: time="2024-12-13T14:26:09.985852515Z" level=warning msg="cleaning up after shim disconnected" id=fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f namespace=k8s.io
Dec 13 14:26:09.985949 env[1205]: time="2024-12-13T14:26:09.985865890Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:09.992666 env[1205]: time="2024-12-13T14:26:09.992615802Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4014 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:10.298435 kubelet[1935]: E1213 14:26:10.298274    1935 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:26:10.717401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa21c4a2a1170a16fdd331c04f0db8e21c6472fd065b3e9646958e6c6f10db9f-rootfs.mount: Deactivated successfully.
Dec 13 14:26:10.906883 kubelet[1935]: E1213 14:26:10.906847    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:10.909397 env[1205]: time="2024-12-13T14:26:10.909351291Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:26:10.928048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522586421.mount: Deactivated successfully.
Dec 13 14:26:10.931758 env[1205]: time="2024-12-13T14:26:10.931707345Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663\""
Dec 13 14:26:10.932286 env[1205]: time="2024-12-13T14:26:10.932260751Z" level=info msg="StartContainer for \"62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663\""
Dec 13 14:26:10.945176 systemd[1]: Started cri-containerd-62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663.scope.
Dec 13 14:26:10.967757 systemd[1]: cri-containerd-62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663.scope: Deactivated successfully.
Dec 13 14:26:10.969081 env[1205]: time="2024-12-13T14:26:10.969026907Z" level=info msg="StartContainer for \"62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663\" returns successfully"
Dec 13 14:26:10.987491 env[1205]: time="2024-12-13T14:26:10.987430599Z" level=info msg="shim disconnected" id=62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663
Dec 13 14:26:10.987491 env[1205]: time="2024-12-13T14:26:10.987489480Z" level=warning msg="cleaning up after shim disconnected" id=62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663 namespace=k8s.io
Dec 13 14:26:10.987491 env[1205]: time="2024-12-13T14:26:10.987497445Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:10.993432 env[1205]: time="2024-12-13T14:26:10.993393070Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:11.717452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62879a2d7ff2a9be15bc57551ca16914a13b7e2c10920561355164077f996663-rootfs.mount: Deactivated successfully.
Dec 13 14:26:11.910489 kubelet[1935]: E1213 14:26:11.910461    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:11.912031 env[1205]: time="2024-12-13T14:26:11.911991399Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:26:12.191828 env[1205]: time="2024-12-13T14:26:12.191731174Z" level=info msg="CreateContainer within sandbox \"903d5497af65e6e6f206b3406b792db5512e0ede4bc98c24296d4cad93feedff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ca58f5ea08e2fc2276cb84ff7e48093d92db74da84b60a8445674606ccf81b3e\""
Dec 13 14:26:12.192551 env[1205]: time="2024-12-13T14:26:12.192511859Z" level=info msg="StartContainer for \"ca58f5ea08e2fc2276cb84ff7e48093d92db74da84b60a8445674606ccf81b3e\""
Dec 13 14:26:12.209302 systemd[1]: Started cri-containerd-ca58f5ea08e2fc2276cb84ff7e48093d92db74da84b60a8445674606ccf81b3e.scope.
Dec 13 14:26:12.243684 kubelet[1935]: E1213 14:26:12.243341    1935 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-b7454" podUID="801f1ea3-ba1e-49c7-9354-7208b4c702bd"
Dec 13 14:26:12.256839 env[1205]: time="2024-12-13T14:26:12.256352139Z" level=info msg="StartContainer for \"ca58f5ea08e2fc2276cb84ff7e48093d92db74da84b60a8445674606ccf81b3e\" returns successfully"
Dec 13 14:26:12.400680 kubelet[1935]: I1213 14:26:12.400629    1935 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:26:12Z","lastTransitionTime":"2024-12-13T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:26:12.492090 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:26:12.914948 kubelet[1935]: E1213 14:26:12.914901    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:12.929645 kubelet[1935]: I1213 14:26:12.929365    1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2hffc" podStartSLOduration=5.929341718 podStartE2EDuration="5.929341718s" podCreationTimestamp="2024-12-13 14:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:12.928907026 +0000 UTC m=+102.783415742" watchObservedRunningTime="2024-12-13 14:26:12.929341718 +0000 UTC m=+102.783850404"
Dec 13 14:26:13.244315 kubelet[1935]: E1213 14:26:13.244181    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:14.243563 kubelet[1935]: E1213 14:26:14.243477    1935 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-b7454" podUID="801f1ea3-ba1e-49c7-9354-7208b4c702bd"
Dec 13 14:26:14.256532 kubelet[1935]: E1213 14:26:14.256498    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:15.124328 systemd-networkd[1032]: lxc_health: Link UP
Dec 13 14:26:15.146169 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:26:15.146416 systemd-networkd[1032]: lxc_health: Gained carrier
Dec 13 14:26:16.244497 kubelet[1935]: E1213 14:26:16.244115    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:16.257411 kubelet[1935]: E1213 14:26:16.257377    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:16.316285 systemd-networkd[1032]: lxc_health: Gained IPv6LL
Dec 13 14:26:16.866015 systemd[1]: run-containerd-runc-k8s.io-ca58f5ea08e2fc2276cb84ff7e48093d92db74da84b60a8445674606ccf81b3e-runc.wJ6B4n.mount: Deactivated successfully.
Dec 13 14:26:16.925380 kubelet[1935]: E1213 14:26:16.925339    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:17.928071 kubelet[1935]: E1213 14:26:17.927989    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:19.243328 kubelet[1935]: E1213 14:26:19.243270    1935 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:26:21.116570 sshd[3783]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:21.118963 systemd[1]: sshd@27-10.0.0.77:22-10.0.0.1:49092.service: Deactivated successfully.
Dec 13 14:26:21.119765 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:26:21.120414 systemd-logind[1187]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:26:21.121203 systemd-logind[1187]: Removed session 28.