Feb 12 19:16:35.741133 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:16:35.741153 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:16:35.741161 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:16:35.741166 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 
Feb 12 19:16:35.741171 kernel: random: crng init done
Feb 12 19:16:35.741176 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:16:35.741183 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:16:35.741189 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS  BXPC     00000001      01000013)
Feb 12 19:16:35.741195 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741200 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741205 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741211 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741216 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741221 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741229 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741234 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741240 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 12 19:16:35.741246 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:16:35.741252 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:16:35.741258 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:16:35.741263 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 12 19:16:35.741269 kernel: Zone ranges:
Feb 12 19:16:35.741274 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:16:35.741281 kernel:   DMA32    empty
Feb 12 19:16:35.741286 kernel:   Normal   empty
Feb 12 19:16:35.741292 kernel: Movable zone start for each node
Feb 12 19:16:35.741298 kernel: Early memory node ranges
Feb 12 19:16:35.741303 kernel:   node   0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:16:35.741309 kernel:   node   0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:16:35.741314 kernel:   node   0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:16:35.741320 kernel:   node   0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:16:35.741325 kernel:   node   0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:16:35.741331 kernel:   node   0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:16:35.741337 kernel:   node   0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:16:35.741342 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:16:35.741350 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:16:35.741355 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:16:35.741361 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:16:35.741366 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:16:35.741372 kernel: psci: Trusted OS migration not required
Feb 12 19:16:35.741380 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:16:35.741386 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:16:35.741393 kernel: ACPI: SRAT not present
Feb 12 19:16:35.741399 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:16:35.741405 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:16:35.741412 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Feb 12 19:16:35.741418 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:16:35.741424 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:16:35.741430 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:16:35.741436 kernel: CPU features: detected: Spectre-v4
Feb 12 19:16:35.741442 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:16:35.741449 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:16:35.741455 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:16:35.741461 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:16:35.741467 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Feb 12 19:16:35.741473 kernel: Policy zone: DMA
Feb 12 19:16:35.741479 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:16:35.741486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:16:35.741492 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:16:35.741498 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:16:35.741504 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:16:35.741510 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 12 19:16:35.741517 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:16:35.741524 kernel: trace event string verifier disabled
Feb 12 19:16:35.741530 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:16:35.741537 kernel: rcu:         RCU event tracing is enabled.
Feb 12 19:16:35.741543 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:16:35.741549 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 12 19:16:35.741555 kernel:         Tracing variant of Tasks RCU enabled.
Feb 12 19:16:35.741561 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:16:35.741567 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:16:35.741573 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:16:35.741579 kernel: GICv3: 256 SPIs implemented
Feb 12 19:16:35.741603 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:16:35.741609 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:16:35.741615 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:16:35.741621 kernel: GICv3: 16 PPIs implemented
Feb 12 19:16:35.741627 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:16:35.741633 kernel: ACPI: SRAT not present
Feb 12 19:16:35.741638 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:16:35.741645 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:16:35.741651 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:16:35.741657 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:16:35.741663 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:16:35.741669 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:35.741677 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:16:35.741683 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:16:35.741689 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:16:35.741695 kernel: arm-pv: using stolen time PV
Feb 12 19:16:35.741701 kernel: Console: colour dummy device 80x25
Feb 12 19:16:35.741707 kernel: ACPI: Core revision 20210730
Feb 12 19:16:35.741714 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:16:35.741720 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:16:35.741726 kernel: LSM: Security Framework initializing
Feb 12 19:16:35.741732 kernel: SELinux:  Initializing.
Feb 12 19:16:35.741739 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:16:35.741746 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:16:35.741752 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:16:35.741758 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:16:35.741764 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:16:35.741770 kernel: Remapping and enabling EFI services.
Feb 12 19:16:35.741776 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:16:35.741782 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:16:35.741789 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:16:35.741796 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:16:35.741803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:35.741809 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:16:35.741815 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:16:35.741821 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:16:35.741828 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:16:35.741834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:35.741840 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:16:35.741846 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:16:35.741852 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:16:35.741860 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:16:35.741866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:35.741872 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:16:35.741878 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:16:35.741888 kernel: SMP: Total of 4 processors activated.
Feb 12 19:16:35.741896 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:16:35.741902 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:16:35.741909 kernel: CPU features: detected: Common not Private translations
Feb 12 19:16:35.741915 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:16:35.741922 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:16:35.741928 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:16:35.741935 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:16:35.741943 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:16:35.741949 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:16:35.741955 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:16:35.741962 kernel: alternatives: patching kernel code
Feb 12 19:16:35.741969 kernel: devtmpfs: initialized
Feb 12 19:16:35.741976 kernel: KASLR enabled
Feb 12 19:16:35.741983 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:16:35.741989 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:16:35.741996 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:16:35.742002 kernel: SMBIOS 3.0.0 present.
Feb 12 19:16:35.742009 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:16:35.742016 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:16:35.742022 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:16:35.742029 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:16:35.742037 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:16:35.742043 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:16:35.742050 kernel: audit: type=2000 audit(0.043:1): state=initialized audit_enabled=0 res=1
Feb 12 19:16:35.742057 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:16:35.742063 kernel: cpuidle: using governor menu
Feb 12 19:16:35.742069 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:16:35.742076 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:16:35.742082 kernel: ACPI: bus type PCI registered
Feb 12 19:16:35.742089 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:16:35.742097 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:16:35.742103 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:16:35.742110 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:16:35.742117 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:16:35.742123 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:16:35.742130 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:16:35.742136 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:16:35.742143 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:16:35.742149 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:16:35.742157 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:16:35.742163 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:16:35.742170 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:16:35.742176 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:16:35.742183 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:16:35.742189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:16:35.742196 kernel: ACPI: Interpreter enabled
Feb 12 19:16:35.742202 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:16:35.742209 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:16:35.742216 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:16:35.742223 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:16:35.742229 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:16:35.742358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:16:35.742422 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:16:35.742481 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:16:35.742541 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:16:35.742628 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:16:35.742638 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Feb 12 19:16:35.742644 kernel: PCI host bridge to bus 0000:00
Feb 12 19:16:35.742718 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:16:35.742774 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 12 19:16:35.742826 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:16:35.742879 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:16:35.742954 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:16:35.743031 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:16:35.743094 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Feb 12 19:16:35.743156 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:16:35.743216 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:16:35.743281 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:16:35.743341 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:16:35.743406 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Feb 12 19:16:35.743461 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:16:35.743564 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 12 19:16:35.743640 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:16:35.743650 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:16:35.743657 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:16:35.743664 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:16:35.743673 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:16:35.743680 kernel: iommu: Default domain type: Translated 
Feb 12 19:16:35.743686 kernel: iommu: DMA domain TLB invalidation policy: strict mode 
Feb 12 19:16:35.743693 kernel: vgaarb: loaded
Feb 12 19:16:35.743699 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:16:35.743706 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 19:16:35.743713 kernel: PTP clock support registered
Feb 12 19:16:35.743719 kernel: Registered efivars operations
Feb 12 19:16:35.743726 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:16:35.743732 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:16:35.743740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:16:35.743747 kernel: pnp: PnP ACPI init
Feb 12 19:16:35.743816 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:16:35.743826 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:16:35.743833 kernel: NET: Registered PF_INET protocol family
Feb 12 19:16:35.743839 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:16:35.743846 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:16:35.743853 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:16:35.743861 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:16:35.743868 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:16:35.743874 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:16:35.743881 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:16:35.743888 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:16:35.743894 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:16:35.743901 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:16:35.743908 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:16:35.743914 kernel: kvm [1]: HYP mode not available
Feb 12 19:16:35.743923 kernel: Initialise system trusted keyrings
Feb 12 19:16:35.743929 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:16:35.743936 kernel: Key type asymmetric registered
Feb 12 19:16:35.743942 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:16:35.743949 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:16:35.743955 kernel: io scheduler mq-deadline registered
Feb 12 19:16:35.743962 kernel: io scheduler kyber registered
Feb 12 19:16:35.743968 kernel: io scheduler bfq registered
Feb 12 19:16:35.743975 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:16:35.743982 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:16:35.743989 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:16:35.744052 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:16:35.744061 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:16:35.744068 kernel: thunder_xcv, ver 1.0
Feb 12 19:16:35.744074 kernel: thunder_bgx, ver 1.0
Feb 12 19:16:35.744081 kernel: nicpf, ver 1.0
Feb 12 19:16:35.744087 kernel: nicvf, ver 1.0
Feb 12 19:16:35.744154 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:16:35.744214 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:16:35 UTC (1707765395)
Feb 12 19:16:35.744222 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:16:35.744229 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:16:35.744235 kernel: Segment Routing with IPv6
Feb 12 19:16:35.744242 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:16:35.744248 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:16:35.744255 kernel: Key type dns_resolver registered
Feb 12 19:16:35.744262 kernel: registered taskstats version 1
Feb 12 19:16:35.744269 kernel: Loading compiled-in X.509 certificates
Feb 12 19:16:35.744276 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:16:35.744283 kernel: Key type .fscrypt registered
Feb 12 19:16:35.744289 kernel: Key type fscrypt-provisioning registered
Feb 12 19:16:35.744296 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:16:35.744302 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:16:35.744309 kernel: ima: No architecture policies found
Feb 12 19:16:35.744315 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:16:35.744321 kernel: Run /init as init process
Feb 12 19:16:35.744329 kernel:   with arguments:
Feb 12 19:16:35.744336 kernel:     /init
Feb 12 19:16:35.744342 kernel:   with environment:
Feb 12 19:16:35.744348 kernel:     HOME=/
Feb 12 19:16:35.744355 kernel:     TERM=linux
Feb 12 19:16:35.744361 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:16:35.744369 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:16:35.744378 systemd[1]: Detected virtualization kvm.
Feb 12 19:16:35.744387 systemd[1]: Detected architecture arm64.
Feb 12 19:16:35.744393 systemd[1]: Running in initrd.
Feb 12 19:16:35.744400 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:16:35.744407 systemd[1]: Hostname set to <localhost>.
Feb 12 19:16:35.744414 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:16:35.744421 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:16:35.744428 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:16:35.744435 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:16:35.744443 systemd[1]: Reached target paths.target.
Feb 12 19:16:35.744450 systemd[1]: Reached target slices.target.
Feb 12 19:16:35.744457 systemd[1]: Reached target swap.target.
Feb 12 19:16:35.744464 systemd[1]: Reached target timers.target.
Feb 12 19:16:35.744471 systemd[1]: Listening on iscsid.socket.
Feb 12 19:16:35.744478 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:16:35.744485 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:16:35.744493 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:16:35.744501 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:16:35.744508 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:16:35.744515 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:16:35.744522 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:16:35.744529 systemd[1]: Reached target sockets.target.
Feb 12 19:16:35.744536 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:16:35.744543 systemd[1]: Finished network-cleanup.service.
Feb 12 19:16:35.744550 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:16:35.744559 systemd[1]: Starting systemd-journald.service...
Feb 12 19:16:35.744566 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:16:35.744573 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:16:35.744580 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:16:35.744602 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:16:35.744610 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:16:35.744617 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:16:35.744624 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:16:35.744632 kernel: audit: type=1130 audit(1707765395.743:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.744640 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:16:35.744651 systemd-journald[290]: Journal started
Feb 12 19:16:35.744692 systemd-journald[290]: Runtime Journal (/run/log/journal/f0c9fe135708468aa4f764734dbd4195) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:16:35.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.736343 systemd-modules-load[291]: Inserted module 'overlay'
Feb 12 19:16:35.748039 systemd[1]: Started systemd-journald.service.
Feb 12 19:16:35.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.748487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:16:35.753396 kernel: audit: type=1130 audit(1707765395.748:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.753417 kernel: audit: type=1130 audit(1707765395.750:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.758621 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:16:35.762448 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 12 19:16:35.763341 kernel: Bridge firewalling registered
Feb 12 19:16:35.765254 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:16:35.769535 kernel: audit: type=1130 audit(1707765395.765:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.766653 systemd-resolved[292]: Positive Trust Anchors:
Feb 12 19:16:35.766661 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:16:35.776519 kernel: audit: type=1130 audit(1707765395.772:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.766692 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:16:35.766797 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:16:35.782204 kernel: SCSI subsystem initialized
Feb 12 19:16:35.782222 dracut-cmdline[307]: dracut-dracut-053
Feb 12 19:16:35.782222 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:16:35.770935 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 12 19:16:35.771994 systemd[1]: Started systemd-resolved.service.
Feb 12 19:16:35.772860 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:16:35.791099 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:16:35.791120 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:16:35.791130 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:16:35.793381 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 12 19:16:35.805284 kernel: audit: type=1130 audit(1707765395.794:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.794311 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:16:35.795987 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:16:35.806833 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:16:35.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.810616 kernel: audit: type=1130 audit(1707765395.807:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.839619 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:16:35.849615 kernel: iscsi: registered transport (tcp)
Feb 12 19:16:35.862609 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:16:35.862624 kernel: QLogic iSCSI HBA Driver
Feb 12 19:16:35.898589 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:16:35.901626 kernel: audit: type=1130 audit(1707765395.898:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:35.900307 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:16:35.944623 kernel: raid6: neonx8   gen() 13797 MB/s
Feb 12 19:16:35.961608 kernel: raid6: neonx8   xor() 10375 MB/s
Feb 12 19:16:35.978605 kernel: raid6: neonx4   gen() 13565 MB/s
Feb 12 19:16:35.995604 kernel: raid6: neonx4   xor() 11165 MB/s
Feb 12 19:16:36.012611 kernel: raid6: neonx2   gen() 12988 MB/s
Feb 12 19:16:36.029607 kernel: raid6: neonx2   xor() 10237 MB/s
Feb 12 19:16:36.046614 kernel: raid6: neonx1   gen() 10501 MB/s
Feb 12 19:16:36.063610 kernel: raid6: neonx1   xor()  8754 MB/s
Feb 12 19:16:36.080607 kernel: raid6: int64x8  gen()  6292 MB/s
Feb 12 19:16:36.097603 kernel: raid6: int64x8  xor()  3547 MB/s
Feb 12 19:16:36.114605 kernel: raid6: int64x4  gen()  7258 MB/s
Feb 12 19:16:36.131606 kernel: raid6: int64x4  xor()  3852 MB/s
Feb 12 19:16:36.148625 kernel: raid6: int64x2  gen()  6152 MB/s
Feb 12 19:16:36.165619 kernel: raid6: int64x2  xor()  3319 MB/s
Feb 12 19:16:36.182610 kernel: raid6: int64x1  gen()  5046 MB/s
Feb 12 19:16:36.199976 kernel: raid6: int64x1  xor()  2493 MB/s
Feb 12 19:16:36.200003 kernel: raid6: using algorithm neonx8 gen() 13797 MB/s
Feb 12 19:16:36.200012 kernel: raid6: .... xor() 10375 MB/s, rmw enabled
Feb 12 19:16:36.200021 kernel: raid6: using neon recovery algorithm
Feb 12 19:16:36.210620 kernel: xor: measuring software checksum speed
Feb 12 19:16:36.211608 kernel:    8regs           : 17322 MB/sec
Feb 12 19:16:36.212612 kernel:    32regs          : 20744 MB/sec
Feb 12 19:16:36.212624 kernel:    arm64_neon      : 27873 MB/sec
Feb 12 19:16:36.213879 kernel: xor: using function: arm64_neon (27873 MB/sec)
Feb 12 19:16:36.270622 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:16:36.281094 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:16:36.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:36.284000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:16:36.286600 kernel: audit: type=1130 audit(1707765396.281:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:36.284000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:16:36.285383 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:16:36.302733 systemd-udevd[493]: Using default interface naming scheme 'v252'.
Feb 12 19:16:36.306304 systemd[1]: Started systemd-udevd.service.
Feb 12 19:16:36.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:36.308188 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:16:36.322562 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
Feb 12 19:16:36.354150 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:16:36.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:36.355885 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:16:36.406974 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:16:36.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:36.455654 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:16:36.458797 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:16:36.458818 kernel: GPT:9289727 != 19775487
Feb 12 19:16:36.458827 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:16:36.458836 kernel: GPT:9289727 != 19775487
Feb 12 19:16:36.459939 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:16:36.459951 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:16:36.478619 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (553)
Feb 12 19:16:36.480828 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:16:36.483777 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:16:36.484760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:16:36.492324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:16:36.495684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:16:36.497387 systemd[1]: Starting disk-uuid.service...
Feb 12 19:16:36.503336 disk-uuid[569]: Primary Header is updated.
Feb 12 19:16:36.503336 disk-uuid[569]: Secondary Entries is updated.
Feb 12 19:16:36.503336 disk-uuid[569]: Secondary Header is updated.
Feb 12 19:16:36.506610 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:16:37.519615 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:16:37.519812 disk-uuid[570]: The operation has completed successfully.
Feb 12 19:16:37.540275 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:16:37.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.540367 systemd[1]: Finished disk-uuid.service.
Feb 12 19:16:37.544488 systemd[1]: Starting verity-setup.service...
Feb 12 19:16:37.560631 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:16:37.583243 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:16:37.585518 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:16:37.587314 systemd[1]: Finished verity-setup.service.
Feb 12 19:16:37.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.640615 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:16:37.641161 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:16:37.642016 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:16:37.642774 systemd[1]: Starting ignition-setup.service...
Feb 12 19:16:37.644679 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:16:37.652920 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:16:37.652970 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:16:37.652980 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:16:37.663260 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:16:37.670148 systemd[1]: Finished ignition-setup.service.
Feb 12 19:16:37.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.671801 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:16:37.742586 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:16:37.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.743000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:16:37.744850 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:16:37.770722 ignition[659]: Ignition 2.14.0
Feb 12 19:16:37.770731 ignition[659]: Stage: fetch-offline
Feb 12 19:16:37.770769 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:37.770778 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:37.770904 ignition[659]: parsed url from cmdline: ""
Feb 12 19:16:37.770907 ignition[659]: no config URL provided
Feb 12 19:16:37.770912 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:16:37.770919 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:16:37.770938 ignition[659]: op(1): [started]  loading QEMU firmware config module
Feb 12 19:16:37.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.777654 systemd-networkd[744]: lo: Link UP
Feb 12 19:16:37.770942 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 19:16:37.777658 systemd-networkd[744]: lo: Gained carrier
Feb 12 19:16:37.776668 ignition[659]: op(1): [finished] loading QEMU firmware config module
Feb 12 19:16:37.778002 systemd-networkd[744]: Enumeration completed
Feb 12 19:16:37.778099 systemd[1]: Started systemd-networkd.service.
Feb 12 19:16:37.778174 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:16:37.779238 systemd[1]: Reached target network.target.
Feb 12 19:16:37.781283 systemd[1]: Starting iscsiuio.service...
Feb 12 19:16:37.787286 systemd-networkd[744]: eth0: Link UP
Feb 12 19:16:37.787293 systemd-networkd[744]: eth0: Gained carrier
Feb 12 19:16:37.790027 systemd[1]: Started iscsiuio.service.
Feb 12 19:16:37.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.791519 systemd[1]: Starting iscsid.service...
Feb 12 19:16:37.795013 iscsid[750]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:16:37.795013 iscsid[750]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:16:37.795013 iscsid[750]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:16:37.795013 iscsid[750]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:16:37.795013 iscsid[750]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:16:37.795013 iscsid[750]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:16:37.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.797896 systemd[1]: Started iscsid.service.
Feb 12 19:16:37.802142 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:16:37.807164 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:16:37.813107 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:16:37.814210 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:16:37.815434 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:16:37.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.816808 systemd[1]: Reached target remote-fs.target.
Feb 12 19:16:37.818923 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:16:37.826793 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:16:37.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.844615 ignition[659]: parsing config with SHA512: fb764d9b080b65c5ca5dc8ecfee2108b0bf671bb9b41dbd660145c11f4dfdb40ab53a76c4de326ffc7a2c02cbbd99fa4e3335ad97893052e75c4b684aaea8663
Feb 12 19:16:37.879691 unknown[659]: fetched base config from "system"
Feb 12 19:16:37.880483 unknown[659]: fetched user config from "qemu"
Feb 12 19:16:37.881757 ignition[659]: fetch-offline: fetch-offline passed
Feb 12 19:16:37.881832 ignition[659]: Ignition finished successfully
Feb 12 19:16:37.884070 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:16:37.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.884846 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 19:16:37.885570 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:16:37.894031 ignition[765]: Ignition 2.14.0
Feb 12 19:16:37.894044 ignition[765]: Stage: kargs
Feb 12 19:16:37.894141 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:37.894150 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:37.895412 ignition[765]: kargs: kargs passed
Feb 12 19:16:37.895463 ignition[765]: Ignition finished successfully
Feb 12 19:16:37.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.897849 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:16:37.899284 systemd[1]: Starting ignition-disks.service...
Feb 12 19:16:37.906214 ignition[771]: Ignition 2.14.0
Feb 12 19:16:37.906225 ignition[771]: Stage: disks
Feb 12 19:16:37.906329 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:37.906338 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:37.908672 systemd[1]: Finished ignition-disks.service.
Feb 12 19:16:37.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.907407 ignition[771]: disks: disks passed
Feb 12 19:16:37.910102 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:16:37.907453 ignition[771]: Ignition finished successfully
Feb 12 19:16:37.912177 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:16:37.916047 systemd[1]: Reached target local-fs.target.
Feb 12 19:16:37.917112 systemd[1]: Reached target sysinit.target.
Feb 12 19:16:37.918167 systemd[1]: Reached target basic.target.
Feb 12 19:16:37.920087 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:16:37.931649 systemd-fsck[779]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 19:16:37.934668 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:16:37.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.936359 systemd[1]: Mounting sysroot.mount...
Feb 12 19:16:37.941607 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:16:37.941979 systemd[1]: Mounted sysroot.mount.
Feb 12 19:16:37.942717 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:16:37.944956 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:16:37.945800 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 19:16:37.945837 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:16:37.945862 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:16:37.947825 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:16:37.950288 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:16:37.954728 initrd-setup-root[789]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:16:37.958189 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:16:37.962147 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:16:37.965984 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:16:37.995382 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:16:37.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:37.997093 systemd[1]: Starting ignition-mount.service...
Feb 12 19:16:37.998364 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:16:38.003891 bash[831]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:16:38.012475 ignition[833]: INFO     : Ignition 2.14.0
Feb 12 19:16:38.012475 ignition[833]: INFO     : Stage: mount
Feb 12 19:16:38.014010 ignition[833]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:38.014010 ignition[833]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:38.014010 ignition[833]: INFO     : mount: mount passed
Feb 12 19:16:38.014010 ignition[833]: INFO     : Ignition finished successfully
Feb 12 19:16:38.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:38.015066 systemd[1]: Finished ignition-mount.service.
Feb 12 19:16:38.020145 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:16:38.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:38.598757 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:16:38.604810 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (841)
Feb 12 19:16:38.606041 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:16:38.606064 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:16:38.606074 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:16:38.609324 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:16:38.610871 systemd[1]: Starting ignition-files.service...
Feb 12 19:16:38.624421 ignition[861]: INFO     : Ignition 2.14.0
Feb 12 19:16:38.624421 ignition[861]: INFO     : Stage: files
Feb 12 19:16:38.625732 ignition[861]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:38.625732 ignition[861]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:38.625732 ignition[861]: DEBUG    : files: compiled without relabeling support, skipping
Feb 12 19:16:38.629803 ignition[861]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 12 19:16:38.629803 ignition[861]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:16:38.633752 ignition[861]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:16:38.635141 ignition[861]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 12 19:16:38.636863 unknown[861]: wrote ssh authorized keys file for user: core
Feb 12 19:16:38.638514 ignition[861]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:16:38.638514 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:16:38.638514 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 12 19:16:38.692033 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:16:38.726340 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:16:38.728224 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:16:38.728224 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 19:16:39.089035 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:16:39.275608 ignition[861]: DEBUG    : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 12 19:16:39.278251 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:16:39.278251 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:16:39.278251 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 12 19:16:39.492669 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:16:39.611094 ignition[861]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 12 19:16:39.613703 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:16:39.613703 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:16:39.613703 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:16:39.619389 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:16:39.619389 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 12 19:16:39.614840 systemd-networkd[744]: eth0: Gained IPv6LL
Feb 12 19:16:39.698535 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:16:40.120187 ignition[861]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 19:16:40.120187 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:16:40.124359 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:16:40.124359 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 12 19:16:40.143882 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:16:40.450488 ignition[861]: DEBUG    : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 12 19:16:40.450488 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:16:40.453696 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:16:40.453696 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:16:40.473904 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:16:41.160751 ignition[861]: DEBUG    : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/home/core/install.sh"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: op(10): [started]  processing unit "containerd.service"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: op(10): op(11): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:16:41.163123 ignition[861]: INFO     : files: op(10): [finished] processing unit "containerd.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(12): [started]  processing unit "prepare-cni-plugins.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(12): op(13): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(14): [started]  processing unit "prepare-critools.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(14): op(15): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(16): [started]  processing unit "prepare-helm.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(16): op(17): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(16): [finished] processing unit "prepare-helm.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(18): [started]  processing unit "coreos-metadata.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(18): op(19): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(18): op(19): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(18): [finished] processing unit "coreos-metadata.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(1a): [started]  setting preset to enabled for "prepare-critools.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:16:41.186923 ignition[861]: INFO     : files: op(1b): [started]  setting preset to enabled for "prepare-helm.service"
Feb 12 19:16:41.209043 ignition[861]: INFO     : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:16:41.209043 ignition[861]: INFO     : files: op(1c): [started]  setting preset to disabled for "coreos-metadata.service"
Feb 12 19:16:41.209043 ignition[861]: INFO     : files: op(1c): op(1d): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:16:41.218465 ignition[861]: INFO     : files: op(1c): op(1d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:16:41.220544 ignition[861]: INFO     : files: op(1c): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:16:41.220544 ignition[861]: INFO     : files: op(1e): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:16:41.220544 ignition[861]: INFO     : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:16:41.220544 ignition[861]: INFO     : files: createResultFile: createFiles: op(1f): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:16:41.220544 ignition[861]: INFO     : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:16:41.220544 ignition[861]: INFO     : files: files passed
Feb 12 19:16:41.220544 ignition[861]: INFO     : Ignition finished successfully
Feb 12 19:16:41.236071 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 19:16:41.236094 kernel: audit: type=1130 audit(1707765401.222:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.236104 kernel: audit: type=1130 audit(1707765401.231:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.236114 kernel: audit: type=1131 audit(1707765401.231:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.220578 systemd[1]: Finished ignition-files.service.
Feb 12 19:16:41.224220 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:16:41.227115 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:16:41.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.241504 initrd-setup-root-after-ignition[886]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:16:41.243120 kernel: audit: type=1130 audit(1707765401.238:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.227804 systemd[1]: Starting ignition-quench.service...
Feb 12 19:16:41.244230 initrd-setup-root-after-ignition[888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:16:41.230385 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:16:41.230471 systemd[1]: Finished ignition-quench.service.
Feb 12 19:16:41.237513 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:16:41.238576 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:16:41.242904 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:16:41.255469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:16:41.255578 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:16:41.260802 kernel: audit: type=1130 audit(1707765401.256:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.260823 kernel: audit: type=1131 audit(1707765401.256:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.257088 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:16:41.261488 systemd[1]: Reached target initrd.target.
Feb 12 19:16:41.262444 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:16:41.263229 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:16:41.273440 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:16:41.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.275101 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:16:41.277659 kernel: audit: type=1130 audit(1707765401.274:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.283230 systemd[1]: Stopped target network.target.
Feb 12 19:16:41.284072 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:16:41.285111 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:16:41.286214 systemd[1]: Stopped target timers.target.
Feb 12 19:16:41.287310 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:16:41.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.287425 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:16:41.291588 kernel: audit: type=1131 audit(1707765401.287:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.288461 systemd[1]: Stopped target initrd.target.
Feb 12 19:16:41.291234 systemd[1]: Stopped target basic.target.
Feb 12 19:16:41.292309 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:16:41.293467 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:16:41.294445 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:16:41.295714 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:16:41.296856 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:16:41.298022 systemd[1]: Stopped target sysinit.target.
Feb 12 19:16:41.299036 systemd[1]: Stopped target local-fs.target.
Feb 12 19:16:41.300073 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:16:41.301204 systemd[1]: Stopped target swap.target.
Feb 12 19:16:41.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.302227 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:16:41.306603 kernel: audit: type=1131 audit(1707765401.302:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.302339 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:16:41.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.303431 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:16:41.310437 kernel: audit: type=1131 audit(1707765401.307:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.306090 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:16:41.306193 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:16:41.307277 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:16:41.307369 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:16:41.310144 systemd[1]: Stopped target paths.target.
Feb 12 19:16:41.311118 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:16:41.316637 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:16:41.317531 systemd[1]: Stopped target slices.target.
Feb 12 19:16:41.318549 systemd[1]: Stopped target sockets.target.
Feb 12 19:16:41.319695 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:16:41.319767 systemd[1]: Closed iscsid.socket.
Feb 12 19:16:41.320730 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:16:41.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.320799 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:16:41.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.321842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:16:41.321944 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:16:41.323025 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:16:41.323115 systemd[1]: Stopped ignition-files.service.
Feb 12 19:16:41.325046 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:16:41.326678 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:16:41.328078 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:16:41.332282 ignition[901]: INFO     : Ignition 2.14.0
Feb 12 19:16:41.332282 ignition[901]: INFO     : Stage: umount
Feb 12 19:16:41.332282 ignition[901]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:41.332282 ignition[901]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:41.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.329214 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:16:41.338864 ignition[901]: INFO     : umount: umount passed
Feb 12 19:16:41.338864 ignition[901]: INFO     : Ignition finished successfully
Feb 12 19:16:41.330230 systemd-networkd[744]: eth0: DHCPv6 lease lost
Feb 12 19:16:41.332290 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:16:41.332551 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:16:41.335669 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:16:41.335775 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:16:41.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.338929 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:16:41.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.339649 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:16:41.339737 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:16:41.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.341004 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:16:41.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.341093 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:16:41.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.342781 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:16:41.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.350000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:16:41.350000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:16:41.342851 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:16:41.344277 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:16:41.344342 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:16:41.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.345430 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:16:41.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.345479 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:16:41.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.346147 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:16:41.346184 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:16:41.347226 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:16:41.347261 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:16:41.348145 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:16:41.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.348178 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:16:41.349214 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:16:41.349247 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:16:41.351013 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:16:41.351896 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:16:41.351945 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:16:41.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.353224 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:16:41.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.353262 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:16:41.354858 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:16:41.354899 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:16:41.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.355936 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:16:41.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.360053 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:16:41.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.360562 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:16:41.360739 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:16:41.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.364843 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:16:41.364924 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:16:41.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.366230 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:16:41.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.366335 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:16:41.367467 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:16:41.367500 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:16:41.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:41.368309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:16:41.368337 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:16:41.369232 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:16:41.369268 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:16:41.370377 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:16:41.370410 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:16:41.371376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:16:41.371410 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:16:41.373115 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:16:41.373755 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 19:16:41.388000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:16:41.388000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:16:41.373809 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 19:16:41.375506 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:16:41.375547 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:16:41.377006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:16:41.377047 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:16:41.391000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:16:41.391000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:16:41.391000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:16:41.378878 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 12 19:16:41.379285 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:16:41.379369 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:16:41.380958 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:16:41.382776 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:16:41.388867 systemd[1]: Switching root.
Feb 12 19:16:41.401951 iscsid[750]: iscsid shutting down.
Feb 12 19:16:41.402438 systemd-journald[290]: Journal stopped
Feb 12 19:16:43.538996 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:16:43.539068 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb 12 19:16:43.539085 kernel: SELinux:  Class anon_inode not defined in policy.
Feb 12 19:16:43.539095 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:16:43.539105 kernel: SELinux:  policy capability network_peer_controls=1
Feb 12 19:16:43.539119 kernel: SELinux:  policy capability open_perms=1
Feb 12 19:16:43.539129 kernel: SELinux:  policy capability extended_socket_class=1
Feb 12 19:16:43.539138 kernel: SELinux:  policy capability always_check_network=0
Feb 12 19:16:43.539147 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 12 19:16:43.539157 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 12 19:16:43.539167 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 12 19:16:43.539178 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 12 19:16:43.539188 systemd[1]: Successfully loaded SELinux policy in 32.588ms.
Feb 12 19:16:43.539207 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.230ms.
Feb 12 19:16:43.539219 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:16:43.539230 systemd[1]: Detected virtualization kvm.
Feb 12 19:16:43.539240 systemd[1]: Detected architecture arm64.
Feb 12 19:16:43.539251 systemd[1]: Detected first boot.
Feb 12 19:16:43.539263 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:16:43.539274 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:16:43.539284 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:16:43.539295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:16:43.539310 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:16:43.539322 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:16:43.539333 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:16:43.539345 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:16:43.539356 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:16:43.539366 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:16:43.539377 systemd[1]: Created slice system-getty.slice.
Feb 12 19:16:43.539387 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:16:43.539398 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:16:43.539409 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:16:43.539420 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:16:43.539431 systemd[1]: Created slice user.slice.
Feb 12 19:16:43.539443 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:16:43.539453 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:16:43.539464 systemd[1]: Set up automount boot.automount.
Feb 12 19:16:43.539474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:16:43.539484 systemd[1]: Reached target integritysetup.target.
Feb 12 19:16:43.539494 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:16:43.539505 systemd[1]: Reached target remote-fs.target.
Feb 12 19:16:43.539515 systemd[1]: Reached target slices.target.
Feb 12 19:16:43.539527 systemd[1]: Reached target swap.target.
Feb 12 19:16:43.539538 systemd[1]: Reached target torcx.target.
Feb 12 19:16:43.539548 systemd[1]: Reached target veritysetup.target.
Feb 12 19:16:43.539564 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:16:43.539575 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:16:43.539585 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:16:43.539603 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:16:43.539615 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:16:43.539626 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:16:43.539638 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:16:43.539649 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:16:43.539660 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:16:43.539670 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:16:43.539681 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:16:43.539691 systemd[1]: Mounting media.mount...
Feb 12 19:16:43.539701 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:16:43.539712 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:16:43.539723 systemd[1]: Mounting tmp.mount...
Feb 12 19:16:43.539733 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:16:43.539746 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:16:43.539756 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:16:43.539767 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:16:43.539777 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:16:43.539788 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:16:43.539797 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:16:43.539808 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:16:43.539818 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:16:43.539828 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:16:43.539840 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 12 19:16:43.539851 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 12 19:16:43.539862 systemd[1]: Starting systemd-journald.service...
Feb 12 19:16:43.539872 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:16:43.539882 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:16:43.539892 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:16:43.539903 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:16:43.539913 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:16:43.539923 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:16:43.539935 systemd[1]: Mounted media.mount.
Feb 12 19:16:43.539945 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:16:43.539955 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:16:43.539965 systemd[1]: Mounted tmp.mount.
Feb 12 19:16:43.539976 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:16:43.539986 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:16:43.539999 systemd-journald[1029]: Journal started
Feb 12 19:16:43.540043 systemd-journald[1029]: Runtime Journal (/run/log/journal/f0c9fe135708468aa4f764734dbd4195) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:16:43.459000 audit[1]: AVC avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:16:43.459000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 19:16:43.532000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:16:43.532000 audit[1029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffeb808e00 a2=4000 a3=1 items=0 ppid=1 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:16:43.532000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:16:43.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.542472 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:16:43.542506 systemd[1]: Started systemd-journald.service.
Feb 12 19:16:43.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.544342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:16:43.544579 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:16:43.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.546105 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:16:43.546280 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:16:43.546607 kernel: fuse: init (API version 7.34)
Feb 12 19:16:43.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.549588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:16:43.549900 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:16:43.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.551672 kernel: loop: module loaded
Feb 12 19:16:43.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.551008 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 19:16:43.551212 systemd[1]: Finished modprobe@fuse.service.
Feb 12 19:16:43.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.552410 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:16:43.552797 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:16:43.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.553990 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:16:43.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.555306 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:16:43.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.556687 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:16:43.557843 systemd[1]: Reached target network-pre.target.
Feb 12 19:16:43.559798 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:16:43.561881 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:16:43.562589 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:16:43.565033 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:16:43.566869 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:16:43.567479 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:16:43.568552 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:16:43.569290 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:16:43.570399 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:16:43.572383 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:16:43.573330 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:16:43.579750 systemd-journald[1029]: Time spent on flushing to /var/log/journal/f0c9fe135708468aa4f764734dbd4195 is 25.127ms for 959 entries.
Feb 12 19:16:43.579750 systemd-journald[1029]: System Journal (/var/log/journal/f0c9fe135708468aa4f764734dbd4195) is 8.0M, max 195.6M, 187.6M free.
Feb 12 19:16:43.614260 systemd-journald[1029]: Received client request to flush runtime journal.
Feb 12 19:16:43.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.581394 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:16:43.582317 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:16:43.614831 udevadm[1087]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 19:16:43.590335 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:16:43.592366 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:16:43.595110 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:16:43.598162 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:16:43.600217 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:16:43.611189 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:16:43.613176 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:16:43.616086 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:16:43.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.629529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:16:43.950451 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:16:43.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.952584 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:16:43.970588 systemd-udevd[1095]: Using default interface naming scheme 'v252'.
Feb 12 19:16:43.982327 systemd[1]: Started systemd-udevd.service.
Feb 12 19:16:43.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:43.984975 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:16:43.998871 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:16:44.007487 systemd[1]: Found device dev-ttyAMA0.device.
Feb 12 19:16:44.035105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:16:44.037316 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:16:44.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.097780 systemd-networkd[1104]: lo: Link UP
Feb 12 19:16:44.097791 systemd-networkd[1104]: lo: Gained carrier
Feb 12 19:16:44.099966 systemd-networkd[1104]: Enumeration completed
Feb 12 19:16:44.100083 systemd[1]: Started systemd-networkd.service.
Feb 12 19:16:44.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.101065 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:16:44.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.103157 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:16:44.103815 systemd-networkd[1104]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:16:44.105540 systemd-networkd[1104]: eth0: Link UP
Feb 12 19:16:44.105548 systemd-networkd[1104]: eth0: Gained carrier
Feb 12 19:16:44.120977 lvm[1129]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:16:44.132739 systemd-networkd[1104]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:16:44.151565 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:16:44.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.152518 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:16:44.154536 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:16:44.158484 lvm[1131]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:16:44.207730 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:16:44.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.208718 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:16:44.209503 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:16:44.209548 systemd[1]: Reached target local-fs.target.
Feb 12 19:16:44.210254 systemd[1]: Reached target machines.target.
Feb 12 19:16:44.212524 systemd[1]: Starting ldconfig.service...
Feb 12 19:16:44.213733 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:16:44.213832 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:16:44.215105 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:16:44.217329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:16:44.219928 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:16:44.220906 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:16:44.221043 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:16:44.222256 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 19:16:44.232231 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1134 (bootctl)
Feb 12 19:16:44.233628 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:16:44.235301 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 19:16:44.236484 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 19:16:44.237680 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 19:16:44.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.239352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:16:44.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.312145 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:16:44.337895 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:16:44.337895 systemd-fsck[1143]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 12 19:16:44.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.340649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:16:44.411481 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:16:44.415379 systemd[1]: Finished ldconfig.service.
Feb 12 19:16:44.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.521653 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:16:44.523296 systemd[1]: Mounting boot.mount...
Feb 12 19:16:44.531473 systemd[1]: Mounted boot.mount.
Feb 12 19:16:44.538696 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:16:44.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.593166 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:16:44.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.595331 systemd[1]: Starting audit-rules.service...
Feb 12 19:16:44.597227 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:16:44.599261 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:16:44.602079 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:16:44.604854 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:16:44.606796 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:16:44.608321 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:16:44.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.609638 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:16:44.621942 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:16:44.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.623000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.624379 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:16:44.628718 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:16:44.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.640382 augenrules[1177]: No rules
Feb 12 19:16:44.639000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:16:44.639000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc38f9bb0 a2=420 a3=0 items=0 ppid=1152 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:16:44.639000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:16:44.641501 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:16:44.642808 systemd[1]: Finished audit-rules.service.
Feb 12 19:16:44.671779 systemd-resolved[1157]: Positive Trust Anchors:
Feb 12 19:16:44.671792 systemd-resolved[1157]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:16:44.671818 systemd-resolved[1157]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:16:44.678202 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:16:44.678907 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 12 19:16:44.679009 systemd-timesyncd[1158]: Initial clock synchronization to Mon 2024-02-12 19:16:44.394539 UTC.
Feb 12 19:16:44.679342 systemd[1]: Reached target time-set.target.
Feb 12 19:16:44.681156 systemd-resolved[1157]: Defaulting to hostname 'linux'.
Feb 12 19:16:44.682535 systemd[1]: Started systemd-resolved.service.
Feb 12 19:16:44.683239 systemd[1]: Reached target network.target.
Feb 12 19:16:44.683856 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:16:44.684459 systemd[1]: Reached target sysinit.target.
Feb 12 19:16:44.685139 systemd[1]: Started motdgen.path.
Feb 12 19:16:44.685699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:16:44.686684 systemd[1]: Started logrotate.timer.
Feb 12 19:16:44.687335 systemd[1]: Started mdadm.timer.
Feb 12 19:16:44.687857 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:16:44.688484 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:16:44.688512 systemd[1]: Reached target paths.target.
Feb 12 19:16:44.689087 systemd[1]: Reached target timers.target.
Feb 12 19:16:44.689955 systemd[1]: Listening on dbus.socket.
Feb 12 19:16:44.691698 systemd[1]: Starting docker.socket...
Feb 12 19:16:44.693338 systemd[1]: Listening on sshd.socket.
Feb 12 19:16:44.694176 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:16:44.694509 systemd[1]: Listening on docker.socket.
Feb 12 19:16:44.695205 systemd[1]: Reached target sockets.target.
Feb 12 19:16:44.695836 systemd[1]: Reached target basic.target.
Feb 12 19:16:44.696471 systemd[1]: System is tainted: cgroupsv1
Feb 12 19:16:44.696515 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:16:44.696535 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:16:44.697636 systemd[1]: Starting containerd.service...
Feb 12 19:16:44.699179 systemd[1]: Starting dbus.service...
Feb 12 19:16:44.700708 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:16:44.702511 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:16:44.703206 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:16:44.704488 systemd[1]: Starting motdgen.service...
Feb 12 19:16:44.706324 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:16:44.708462 systemd[1]: Starting prepare-critools.service...
Feb 12 19:16:44.710198 systemd[1]: Starting prepare-helm.service...
Feb 12 19:16:44.713427 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:16:44.715282 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:16:44.718277 systemd[1]: Starting systemd-logind.service...
Feb 12 19:16:44.721857 jq[1190]: false
Feb 12 19:16:44.719105 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:16:44.719185 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:16:44.720492 systemd[1]: Starting update-engine.service...
Feb 12 19:16:44.723114 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:16:44.733770 jq[1210]: true
Feb 12 19:16:44.738250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:16:44.738486 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:16:44.744827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:16:44.745077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:16:44.751363 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:16:44.751625 systemd[1]: Finished motdgen.service.
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda1
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda2
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda3
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found usr
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda4
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda6
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda7
Feb 12 19:16:44.752062 extend-filesystems[1191]: Found vda9
Feb 12 19:16:44.752062 extend-filesystems[1191]: Checking size of /dev/vda9
Feb 12 19:16:44.761336 tar[1213]: crictl
Feb 12 19:16:44.765231 jq[1222]: true
Feb 12 19:16:44.777720 tar[1212]: ./
Feb 12 19:16:44.777720 tar[1212]: ./macvlan
Feb 12 19:16:44.777996 tar[1214]: linux-arm64/helm
Feb 12 19:16:44.781519 dbus-daemon[1189]: [system] SELinux support is enabled
Feb 12 19:16:44.782140 systemd[1]: Started dbus.service.
Feb 12 19:16:44.784776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:16:44.784811 systemd[1]: Reached target system-config.target.
Feb 12 19:16:44.785481 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:16:44.785494 systemd[1]: Reached target user-config.target.
Feb 12 19:16:44.803742 extend-filesystems[1191]: Resized partition /dev/vda9
Feb 12 19:16:44.807586 extend-filesystems[1254]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 19:16:44.818673 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 12 19:16:44.852856 tar[1212]: ./static
Feb 12 19:16:44.868138 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 12 19:16:44.873372 systemd-logind[1208]: New seat seat0.
Feb 12 19:16:44.875692 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 12 19:16:44.894190 update_engine[1209]: I0212 19:16:44.877122  1209 main.cc:92] Flatcar Update Engine starting
Feb 12 19:16:44.894190 update_engine[1209]: I0212 19:16:44.880959  1209 update_check_scheduler.cc:74] Next update check in 8m49s
Feb 12 19:16:44.894972 extend-filesystems[1254]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 19:16:44.894972 extend-filesystems[1254]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 19:16:44.894972 extend-filesystems[1254]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 12 19:16:44.879771 systemd[1]: Started update-engine.service.
Feb 12 19:16:44.901423 bash[1247]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:16:44.901529 extend-filesystems[1191]: Resized filesystem in /dev/vda9
Feb 12 19:16:44.882649 systemd[1]: Started locksmithd.service.
Feb 12 19:16:44.883726 systemd[1]: Started systemd-logind.service.
Feb 12 19:16:44.896534 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:16:44.898453 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:16:44.898703 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:16:44.916762 tar[1212]: ./vlan
Feb 12 19:16:44.944079 tar[1212]: ./portmap
Feb 12 19:16:44.952729 env[1216]: time="2024-02-12T19:16:44.952650040Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:16:44.971083 tar[1212]: ./host-local
Feb 12 19:16:44.987997 env[1216]: time="2024-02-12T19:16:44.987949120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:16:44.990250 env[1216]: time="2024-02-12T19:16:44.990173240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:44.995784 env[1216]: time="2024-02-12T19:16:44.995744960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:16:44.996045 env[1216]: time="2024-02-12T19:16:44.996011640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:44.996462 env[1216]: time="2024-02-12T19:16:44.996434640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:16:44.997004 env[1216]: time="2024-02-12T19:16:44.996982360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:44.997155 env[1216]: time="2024-02-12T19:16:44.997130400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:16:44.997321 env[1216]: time="2024-02-12T19:16:44.997304000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:44.997604 env[1216]: time="2024-02-12T19:16:44.997566320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:44.998275 env[1216]: time="2024-02-12T19:16:44.998253200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:44.998832 env[1216]: time="2024-02-12T19:16:44.998807640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:16:44.999024 env[1216]: time="2024-02-12T19:16:44.998959040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:16:44.999252 env[1216]: time="2024-02-12T19:16:44.999228000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:16:44.999436 env[1216]: time="2024-02-12T19:16:44.999419200Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:16:44.999698 tar[1212]: ./vrf
Feb 12 19:16:45.004373 env[1216]: time="2024-02-12T19:16:45.004261145Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:16:45.004662 env[1216]: time="2024-02-12T19:16:45.004642070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:16:45.004921 env[1216]: time="2024-02-12T19:16:45.004901511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:16:45.005223 env[1216]: time="2024-02-12T19:16:45.005156669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.005573 env[1216]: time="2024-02-12T19:16:45.005448169Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.005828 env[1216]: time="2024-02-12T19:16:45.005809033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.006058 env[1216]: time="2024-02-12T19:16:45.006040466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.008551 env[1216]: time="2024-02-12T19:16:45.008513467Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.008746 env[1216]: time="2024-02-12T19:16:45.008560918Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.008746 env[1216]: time="2024-02-12T19:16:45.008578047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.008746 env[1216]: time="2024-02-12T19:16:45.008603972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.008746 env[1216]: time="2024-02-12T19:16:45.008617050Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:16:45.008841 env[1216]: time="2024-02-12T19:16:45.008757283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:16:45.008841 env[1216]: time="2024-02-12T19:16:45.008831586Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:16:45.009123 env[1216]: time="2024-02-12T19:16:45.009101674Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:16:45.009173 env[1216]: time="2024-02-12T19:16:45.009135122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009173 env[1216]: time="2024-02-12T19:16:45.009150630Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:16:45.009328 env[1216]: time="2024-02-12T19:16:45.009268449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009328 env[1216]: time="2024-02-12T19:16:45.009287661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009328 env[1216]: time="2024-02-12T19:16:45.009301550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009328 env[1216]: time="2024-02-12T19:16:45.009312660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009328 env[1216]: time="2024-02-12T19:16:45.009324349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009446 env[1216]: time="2024-02-12T19:16:45.009336695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009446 env[1216]: time="2024-02-12T19:16:45.009348422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009446 env[1216]: time="2024-02-12T19:16:45.009360266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009446 env[1216]: time="2024-02-12T19:16:45.009373653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:16:45.009576 env[1216]: time="2024-02-12T19:16:45.009497721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009576 env[1216]: time="2024-02-12T19:16:45.009512921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009576 env[1216]: time="2024-02-12T19:16:45.009524649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009576 env[1216]: time="2024-02-12T19:16:45.009536840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:16:45.009576 env[1216]: time="2024-02-12T19:16:45.009550073Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:16:45.009576 env[1216]: time="2024-02-12T19:16:45.009572487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:16:45.009716 env[1216]: time="2024-02-12T19:16:45.009589230Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:16:45.009716 env[1216]: time="2024-02-12T19:16:45.009633711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:16:45.009961 env[1216]: time="2024-02-12T19:16:45.009900752Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.009962362Z" level=info msg="Connect containerd service"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.009991334Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.010685131Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.010993566Z" level=info msg="Start subscribing containerd event"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011061850Z" level=info msg="Start recovering state"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011125544Z" level=info msg="Start event monitor"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011131678Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011146415Z" level=info msg="Start snapshots syncer"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011189353Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011241009Z" level=info msg="containerd successfully booted in 0.059987s"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.011195062Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:16:45.012571 env[1216]: time="2024-02-12T19:16:45.012349641Z" level=info msg="Start streaming server"
Feb 12 19:16:45.011359 systemd[1]: Started containerd.service.
Feb 12 19:16:45.041301 tar[1212]: ./bridge
Feb 12 19:16:45.093108 tar[1212]: ./tuning
Feb 12 19:16:45.133878 tar[1212]: ./firewall
Feb 12 19:16:45.178022 tar[1212]: ./host-device
Feb 12 19:16:45.212887 tar[1212]: ./sbr
Feb 12 19:16:45.236691 tar[1212]: ./loopback
Feb 12 19:16:45.259802 tar[1212]: ./dhcp
Feb 12 19:16:45.274073 tar[1214]: linux-arm64/LICENSE
Feb 12 19:16:45.274203 tar[1214]: linux-arm64/README.md
Feb 12 19:16:45.282168 systemd[1]: Finished prepare-helm.service.
Feb 12 19:16:45.291007 systemd[1]: Finished prepare-critools.service.
Feb 12 19:16:45.306213 locksmithd[1256]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:16:45.310682 systemd-networkd[1104]: eth0: Gained IPv6LL
Feb 12 19:16:45.326612 tar[1212]: ./ptp
Feb 12 19:16:45.353698 tar[1212]: ./ipvlan
Feb 12 19:16:45.381017 tar[1212]: ./bandwidth
Feb 12 19:16:45.418881 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:16:46.646486 sshd_keygen[1227]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:16:46.663537 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:16:46.665822 systemd[1]: Starting issuegen.service...
Feb 12 19:16:46.670359 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:16:46.670571 systemd[1]: Finished issuegen.service.
Feb 12 19:16:46.672907 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:16:46.678422 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:16:46.680527 systemd[1]: Started getty@tty1.service.
Feb 12 19:16:46.682415 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 12 19:16:46.683475 systemd[1]: Reached target getty.target.
Feb 12 19:16:46.684148 systemd[1]: Reached target multi-user.target.
Feb 12 19:16:46.686074 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:16:46.692214 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:16:46.692426 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:16:46.693471 systemd[1]: Startup finished in 6.488s (kernel) + 5.230s (userspace) = 11.718s.
Feb 12 19:16:47.814278 systemd[1]: Created slice system-sshd.slice.
Feb 12 19:16:47.815411 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:53392.service.
Feb 12 19:16:47.870367 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 53392 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:47.874155 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:47.882174 systemd[1]: Created slice user-500.slice.
Feb 12 19:16:47.883153 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:16:47.884841 systemd-logind[1208]: New session 1 of user core.
Feb 12 19:16:47.892081 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:16:47.893554 systemd[1]: Starting user@500.service...
Feb 12 19:16:47.896700 (systemd)[1304]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:47.961143 systemd[1304]: Queued start job for default target default.target.
Feb 12 19:16:47.961673 systemd[1304]: Reached target paths.target.
Feb 12 19:16:47.961792 systemd[1304]: Reached target sockets.target.
Feb 12 19:16:47.961862 systemd[1304]: Reached target timers.target.
Feb 12 19:16:47.961936 systemd[1304]: Reached target basic.target.
Feb 12 19:16:47.962039 systemd[1304]: Reached target default.target.
Feb 12 19:16:47.962125 systemd[1304]: Startup finished in 58ms.
Feb 12 19:16:47.962135 systemd[1]: Started user@500.service.
Feb 12 19:16:47.963904 systemd[1]: Started session-1.scope.
Feb 12 19:16:48.017385 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:53404.service.
Feb 12 19:16:48.077739 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 53404 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:48.079027 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:48.083346 systemd[1]: Started session-2.scope.
Feb 12 19:16:48.083647 systemd-logind[1208]: New session 2 of user core.
Feb 12 19:16:48.148027 sshd[1313]: pam_unix(sshd:session): session closed for user core
Feb 12 19:16:48.150121 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:53410.service.
Feb 12 19:16:48.152506 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:53404.service: Deactivated successfully.
Feb 12 19:16:48.153426 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit.
Feb 12 19:16:48.153486 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 19:16:48.154143 systemd-logind[1208]: Removed session 2.
Feb 12 19:16:48.197782 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 53410 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:48.198984 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:48.202155 systemd-logind[1208]: New session 3 of user core.
Feb 12 19:16:48.202928 systemd[1]: Started session-3.scope.
Feb 12 19:16:48.251779 sshd[1318]: pam_unix(sshd:session): session closed for user core
Feb 12 19:16:48.253459 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:53416.service.
Feb 12 19:16:48.256696 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:53410.service: Deactivated successfully.
Feb 12 19:16:48.257399 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 19:16:48.258226 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit.
Feb 12 19:16:48.259726 systemd-logind[1208]: Removed session 3.
Feb 12 19:16:48.298800 sshd[1325]: Accepted publickey for core from 10.0.0.1 port 53416 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:48.300231 sshd[1325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:48.305208 systemd[1]: Started session-4.scope.
Feb 12 19:16:48.305500 systemd-logind[1208]: New session 4 of user core.
Feb 12 19:16:48.360765 sshd[1325]: pam_unix(sshd:session): session closed for user core
Feb 12 19:16:48.363055 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:53432.service.
Feb 12 19:16:48.365637 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:53416.service: Deactivated successfully.
Feb 12 19:16:48.366912 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:16:48.367010 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:16:48.367993 systemd-logind[1208]: Removed session 4.
Feb 12 19:16:48.407950 sshd[1332]: Accepted publickey for core from 10.0.0.1 port 53432 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:48.409212 sshd[1332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:48.413235 systemd[1]: Started session-5.scope.
Feb 12 19:16:48.413818 systemd-logind[1208]: New session 5 of user core.
Feb 12 19:16:48.484751 sudo[1338]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:16:48.484968 sudo[1338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:16:49.143653 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:16:49.151317 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:16:49.151633 systemd[1]: Reached target network-online.target.
Feb 12 19:16:49.153062 systemd[1]: Starting docker.service...
Feb 12 19:16:49.235285 env[1357]: time="2024-02-12T19:16:49.235222956Z" level=info msg="Starting up"
Feb 12 19:16:49.236909 env[1357]: time="2024-02-12T19:16:49.236883353Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:16:49.236995 env[1357]: time="2024-02-12T19:16:49.236981191Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:16:49.237061 env[1357]: time="2024-02-12T19:16:49.237046365Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 12 19:16:49.237113 env[1357]: time="2024-02-12T19:16:49.237099788Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:16:49.239060 env[1357]: time="2024-02-12T19:16:49.239036350Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:16:49.239060 env[1357]: time="2024-02-12T19:16:49.239057540Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:16:49.239153 env[1357]: time="2024-02-12T19:16:49.239071796Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 12 19:16:49.239153 env[1357]: time="2024-02-12T19:16:49.239080648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:16:49.465513 env[1357]: time="2024-02-12T19:16:49.465011572Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 12 19:16:49.465513 env[1357]: time="2024-02-12T19:16:49.465040555Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 12 19:16:49.465513 env[1357]: time="2024-02-12T19:16:49.465173566Z" level=info msg="Loading containers: start."
Feb 12 19:16:49.578618 kernel: Initializing XFRM netlink socket
Feb 12 19:16:49.601196 env[1357]: time="2024-02-12T19:16:49.601160377Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 19:16:49.662622 systemd-networkd[1104]: docker0: Link UP
Feb 12 19:16:49.674347 env[1357]: time="2024-02-12T19:16:49.674310206Z" level=info msg="Loading containers: done."
Feb 12 19:16:49.698073 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3675627711-merged.mount: Deactivated successfully.
Feb 12 19:16:49.701634 env[1357]: time="2024-02-12T19:16:49.701579643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 19:16:49.701815 env[1357]: time="2024-02-12T19:16:49.701786992Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 19:16:49.701909 env[1357]: time="2024-02-12T19:16:49.701888043Z" level=info msg="Daemon has completed initialization"
Feb 12 19:16:49.718806 systemd[1]: Started docker.service.
Feb 12 19:16:49.725533 env[1357]: time="2024-02-12T19:16:49.725478192Z" level=info msg="API listen on /run/docker.sock"
Feb 12 19:16:49.742612 systemd[1]: Reloading.
Feb 12 19:16:49.786796 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2024-02-12T19:16:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:16:49.786824 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2024-02-12T19:16:49Z" level=info msg="torcx already run"
Feb 12 19:16:49.853361 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:16:49.853382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:16:49.871516 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:16:49.927778 systemd[1]: Started kubelet.service.
Feb 12 19:16:50.107679 kubelet[1543]: E0212 19:16:50.107523    1543 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 19:16:50.110622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:16:50.110788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:16:50.321184 env[1216]: time="2024-02-12T19:16:50.320895900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 12 19:16:50.964445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618974523.mount: Deactivated successfully.
Feb 12 19:16:52.361236 env[1216]: time="2024-02-12T19:16:52.361187636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:52.363940 env[1216]: time="2024-02-12T19:16:52.363901626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:52.365534 env[1216]: time="2024-02-12T19:16:52.365511444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:52.367126 env[1216]: time="2024-02-12T19:16:52.367092311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:52.368731 env[1216]: time="2024-02-12T19:16:52.368698144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\""
Feb 12 19:16:52.379027 env[1216]: time="2024-02-12T19:16:52.378983423Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 12 19:16:53.899708 env[1216]: time="2024-02-12T19:16:53.899652138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:53.903163 env[1216]: time="2024-02-12T19:16:53.903120899Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:53.905760 env[1216]: time="2024-02-12T19:16:53.905728426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:53.908160 env[1216]: time="2024-02-12T19:16:53.908127135Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:53.908981 env[1216]: time="2024-02-12T19:16:53.908954112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\""
Feb 12 19:16:53.920835 env[1216]: time="2024-02-12T19:16:53.920799091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 12 19:16:55.116547 env[1216]: time="2024-02-12T19:16:55.116501557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:55.120164 env[1216]: time="2024-02-12T19:16:55.120113551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:55.123090 env[1216]: time="2024-02-12T19:16:55.123058638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:55.124721 env[1216]: time="2024-02-12T19:16:55.124692896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:55.125464 env[1216]: time="2024-02-12T19:16:55.125434537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\""
Feb 12 19:16:55.134180 env[1216]: time="2024-02-12T19:16:55.134135546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 12 19:16:56.190886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172205764.mount: Deactivated successfully.
Feb 12 19:16:56.525478 env[1216]: time="2024-02-12T19:16:56.525358852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:56.526869 env[1216]: time="2024-02-12T19:16:56.526834680Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:56.528824 env[1216]: time="2024-02-12T19:16:56.528789993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:56.530010 env[1216]: time="2024-02-12T19:16:56.529967998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:56.531280 env[1216]: time="2024-02-12T19:16:56.531209954Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\""
Feb 12 19:16:56.540858 env[1216]: time="2024-02-12T19:16:56.540822038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 19:16:56.996541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943359243.mount: Deactivated successfully.
Feb 12 19:16:57.006533 env[1216]: time="2024-02-12T19:16:57.006488472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:57.008141 env[1216]: time="2024-02-12T19:16:57.008093143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:57.011064 env[1216]: time="2024-02-12T19:16:57.011023458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:57.013218 env[1216]: time="2024-02-12T19:16:57.013176099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:57.013932 env[1216]: time="2024-02-12T19:16:57.013898770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 12 19:16:57.027132 env[1216]: time="2024-02-12T19:16:57.027095313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 12 19:16:58.069909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000818054.mount: Deactivated successfully.
Feb 12 19:16:59.934340 env[1216]: time="2024-02-12T19:16:59.934283842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:59.935734 env[1216]: time="2024-02-12T19:16:59.935688821Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:59.937298 env[1216]: time="2024-02-12T19:16:59.937266567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:59.939836 env[1216]: time="2024-02-12T19:16:59.939801758Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:16:59.940273 env[1216]: time="2024-02-12T19:16:59.940245592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\""
Feb 12 19:16:59.950365 env[1216]: time="2024-02-12T19:16:59.950325848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 12 19:17:00.361664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:17:00.361831 systemd[1]: Stopped kubelet.service.
Feb 12 19:17:00.364473 systemd[1]: Started kubelet.service.
Feb 12 19:17:00.429040 kubelet[1606]: E0212 19:17:00.428961    1606 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 19:17:00.431989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:17:00.432154 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:17:00.468338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299845943.mount: Deactivated successfully.
Feb 12 19:17:00.897051 env[1216]: time="2024-02-12T19:17:00.897002973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:00.901498 env[1216]: time="2024-02-12T19:17:00.901455162Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:00.906042 env[1216]: time="2024-02-12T19:17:00.905996879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:00.907750 env[1216]: time="2024-02-12T19:17:00.907715399Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:00.908343 env[1216]: time="2024-02-12T19:17:00.908312681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\""
Feb 12 19:17:05.813544 systemd[1]: Stopped kubelet.service.
Feb 12 19:17:05.829909 systemd[1]: Reloading.
Feb 12 19:17:05.887246 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2024-02-12T19:17:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:17:05.887644 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2024-02-12T19:17:05Z" level=info msg="torcx already run"
Feb 12 19:17:06.014356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:17:06.014378 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:17:06.031427 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:17:06.094279 systemd[1]: Started kubelet.service.
Feb 12 19:17:06.134001 kubelet[1745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:17:06.134001 kubelet[1745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:06.134381 kubelet[1745]: I0212 19:17:06.134104    1745 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:17:06.135621 kubelet[1745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:17:06.135621 kubelet[1745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:06.809607 kubelet[1745]: I0212 19:17:06.809559    1745 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 19:17:06.809607 kubelet[1745]: I0212 19:17:06.809586    1745 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:17:06.809801 kubelet[1745]: I0212 19:17:06.809787    1745 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 19:17:06.814234 kubelet[1745]: I0212 19:17:06.814206    1745 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:17:06.814556 kubelet[1745]: E0212 19:17:06.814269    1745 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.815746 kubelet[1745]: W0212 19:17:06.815722    1745 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:17:06.816521 kubelet[1745]: I0212 19:17:06.816502    1745 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 12 19:17:06.816906 kubelet[1745]: I0212 19:17:06.816885    1745 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:17:06.816967 kubelet[1745]: I0212 19:17:06.816956    1745 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:17:06.817047 kubelet[1745]: I0212 19:17:06.817036    1745 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:17:06.817047 kubelet[1745]: I0212 19:17:06.817046    1745 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 19:17:06.817236 kubelet[1745]: I0212 19:17:06.817212    1745 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:06.822435 kubelet[1745]: I0212 19:17:06.822407    1745 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 19:17:06.822435 kubelet[1745]: I0212 19:17:06.822431    1745 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:17:06.822585 kubelet[1745]: I0212 19:17:06.822575    1745 kubelet.go:297] "Adding apiserver pod source"
Feb 12 19:17:06.823471 kubelet[1745]: W0212 19:17:06.823430    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.823578 kubelet[1745]: E0212 19:17:06.823565    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.825656 kubelet[1745]: I0212 19:17:06.825625    1745 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:17:06.826249 kubelet[1745]: W0212 19:17:06.826167    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.826249 kubelet[1745]: E0212 19:17:06.826228    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.826437 kubelet[1745]: I0212 19:17:06.826423    1745 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:17:06.827405 kubelet[1745]: W0212 19:17:06.827372    1745 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:17:06.827877 kubelet[1745]: I0212 19:17:06.827851    1745 server.go:1186] "Started kubelet"
Feb 12 19:17:06.828018 kubelet[1745]: I0212 19:17:06.827984    1745 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:17:06.828797 kubelet[1745]: I0212 19:17:06.828765    1745 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:17:06.829283 kubelet[1745]: E0212 19:17:06.829183    1745 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b3339b89c40e22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 6, 827824674, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 6, 827824674, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.59:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.59:6443: connect: connection refused'(may retry after sleeping)
Feb 12 19:17:06.829372 kubelet[1745]: E0212 19:17:06.829358    1745 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:17:06.829403 kubelet[1745]: E0212 19:17:06.829380    1745 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:17:06.829922 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 19:17:06.830050 kubelet[1745]: I0212 19:17:06.830025    1745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:17:06.830367 kubelet[1745]: I0212 19:17:06.830342    1745 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:17:06.830442 kubelet[1745]: E0212 19:17:06.830428    1745 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 19:17:06.830971 kubelet[1745]: E0212 19:17:06.830929    1745 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.831031 kubelet[1745]: I0212 19:17:06.830981    1745 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:17:06.831679 kubelet[1745]: W0212 19:17:06.831642    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.831796 kubelet[1745]: E0212 19:17:06.831780    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.863620 kubelet[1745]: I0212 19:17:06.863578    1745 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:17:06.863790 kubelet[1745]: I0212 19:17:06.863781    1745 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:17:06.863857 kubelet[1745]: I0212 19:17:06.863848    1745 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:06.869770 kubelet[1745]: I0212 19:17:06.869744    1745 policy_none.go:49] "None policy: Start"
Feb 12 19:17:06.870438 kubelet[1745]: I0212 19:17:06.870420    1745 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:17:06.870513 kubelet[1745]: I0212 19:17:06.870445    1745 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:17:06.874806 kubelet[1745]: I0212 19:17:06.874784    1745 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:17:06.875867 kubelet[1745]: I0212 19:17:06.875837    1745 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:17:06.876063 kubelet[1745]: I0212 19:17:06.876042    1745 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:17:06.877506 kubelet[1745]: E0212 19:17:06.877475    1745 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 12 19:17:06.894153 kubelet[1745]: I0212 19:17:06.894123    1745 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:17:06.894338 kubelet[1745]: I0212 19:17:06.894324    1745 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:17:06.894415 kubelet[1745]: I0212 19:17:06.894405    1745 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:17:06.894512 kubelet[1745]: E0212 19:17:06.894502    1745 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 19:17:06.895038 kubelet[1745]: W0212 19:17:06.894997    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.895163 kubelet[1745]: E0212 19:17:06.895149    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:06.931665 kubelet[1745]: I0212 19:17:06.931635    1745 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:06.932075 kubelet[1745]: E0212 19:17:06.932058    1745 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Feb 12 19:17:06.995191 kubelet[1745]: I0212 19:17:06.995166    1745 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:06.996481 kubelet[1745]: I0212 19:17:06.996454    1745 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:06.999554 kubelet[1745]: I0212 19:17:06.999522    1745 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:07.000474 kubelet[1745]: I0212 19:17:07.000453    1745 status_manager.go:698] "Failed to get status for pod" podUID=18be35aab02d8fe0bebd95f4ebe2d6bb pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.59:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.59:6443: connect: connection refused"
Feb 12 19:17:07.001047 kubelet[1745]: I0212 19:17:07.001030    1745 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.59:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.59:6443: connect: connection refused"
Feb 12 19:17:07.001744 kubelet[1745]: I0212 19:17:07.001727    1745 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.59:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.59:6443: connect: connection refused"
Feb 12 19:17:07.032193 kubelet[1745]: E0212 19:17:07.032136    1745 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:07.032470 kubelet[1745]: I0212 19:17:07.032455    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:07.032623 kubelet[1745]: I0212 19:17:07.032587    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:07.032696 kubelet[1745]: I0212 19:17:07.032678    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:07.032746 kubelet[1745]: I0212 19:17:07.032732    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:07.032880 kubelet[1745]: I0212 19:17:07.032855    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:07.032982 kubelet[1745]: I0212 19:17:07.032971    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:17:07.133418 kubelet[1745]: I0212 19:17:07.133305    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18be35aab02d8fe0bebd95f4ebe2d6bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18be35aab02d8fe0bebd95f4ebe2d6bb\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:07.133606 kubelet[1745]: I0212 19:17:07.133578    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18be35aab02d8fe0bebd95f4ebe2d6bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18be35aab02d8fe0bebd95f4ebe2d6bb\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:07.133712 kubelet[1745]: I0212 19:17:07.133701    1745 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18be35aab02d8fe0bebd95f4ebe2d6bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18be35aab02d8fe0bebd95f4ebe2d6bb\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:07.134780 kubelet[1745]: I0212 19:17:07.134757    1745 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:07.135381 kubelet[1745]: E0212 19:17:07.135365    1745 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Feb 12 19:17:07.300272 kubelet[1745]: E0212 19:17:07.300236    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:07.300920 env[1216]: time="2024-02-12T19:17:07.300872827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18be35aab02d8fe0bebd95f4ebe2d6bb,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:07.305064 kubelet[1745]: E0212 19:17:07.305047    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:07.305460 kubelet[1745]: E0212 19:17:07.305448    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:07.305564 env[1216]: time="2024-02-12T19:17:07.305530520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:07.305919 env[1216]: time="2024-02-12T19:17:07.305870997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:07.433090 kubelet[1745]: E0212 19:17:07.432989    1745 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:07.537310 kubelet[1745]: I0212 19:17:07.537286    1745 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:07.537607 kubelet[1745]: E0212 19:17:07.537582    1745 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Feb 12 19:17:07.634408 kubelet[1745]: W0212 19:17:07.634358    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:07.634560 kubelet[1745]: E0212 19:17:07.634550    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:07.913643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024824151.mount: Deactivated successfully.
Feb 12 19:17:07.918654 env[1216]: time="2024-02-12T19:17:07.918612997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.920317 env[1216]: time="2024-02-12T19:17:07.920279693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.923133 env[1216]: time="2024-02-12T19:17:07.923092546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.923920 env[1216]: time="2024-02-12T19:17:07.923897787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.927287 env[1216]: time="2024-02-12T19:17:07.927248905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.929134 env[1216]: time="2024-02-12T19:17:07.929106760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.931084 env[1216]: time="2024-02-12T19:17:07.931050054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.931869 env[1216]: time="2024-02-12T19:17:07.931829743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.934443 kubelet[1745]: W0212 19:17:07.934388    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:07.934517 kubelet[1745]: E0212 19:17:07.934452    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:07.935606 env[1216]: time="2024-02-12T19:17:07.935549206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.940314 env[1216]: time="2024-02-12T19:17:07.940272097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.942130 env[1216]: time="2024-02-12T19:17:07.942099050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.944196 env[1216]: time="2024-02-12T19:17:07.944164234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.971423 env[1216]: time="2024-02-12T19:17:07.971326791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:07.971423 env[1216]: time="2024-02-12T19:17:07.971365957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:07.971423 env[1216]: time="2024-02-12T19:17:07.971375978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:07.971852 env[1216]: time="2024-02-12T19:17:07.971740370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bed2d490960658a58e8002ecc0f2b83f9f4c659cdccefadfbb404b9928e7166b pid=1836 runtime=io.containerd.runc.v2
Feb 12 19:17:07.971928 env[1216]: time="2024-02-12T19:17:07.971881744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:07.971928 env[1216]: time="2024-02-12T19:17:07.971911807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:07.972030 env[1216]: time="2024-02-12T19:17:07.971922068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:07.973397 env[1216]: time="2024-02-12T19:17:07.973354166Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dddf1bba81637390769195175f717ac82cdf9e749ea72cb98865ae3b156f093 pid=1835 runtime=io.containerd.runc.v2
Feb 12 19:17:07.974648 env[1216]: time="2024-02-12T19:17:07.974578616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:07.974739 env[1216]: time="2024-02-12T19:17:07.974690964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:07.974739 env[1216]: time="2024-02-12T19:17:07.974713482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:07.975013 env[1216]: time="2024-02-12T19:17:07.974978302Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/843e82d89f93eac1cc190744f97d0cec41df730c2957218b42ae774aa54ca5f9 pid=1854 runtime=io.containerd.runc.v2
Feb 12 19:17:08.046995 env[1216]: time="2024-02-12T19:17:08.046952605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bed2d490960658a58e8002ecc0f2b83f9f4c659cdccefadfbb404b9928e7166b\""
Feb 12 19:17:08.047312 env[1216]: time="2024-02-12T19:17:08.047283100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"843e82d89f93eac1cc190744f97d0cec41df730c2957218b42ae774aa54ca5f9\""
Feb 12 19:17:08.047721 env[1216]: time="2024-02-12T19:17:08.047687433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18be35aab02d8fe0bebd95f4ebe2d6bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dddf1bba81637390769195175f717ac82cdf9e749ea72cb98865ae3b156f093\""
Feb 12 19:17:08.050584 kubelet[1745]: E0212 19:17:08.050537    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:08.050709 kubelet[1745]: E0212 19:17:08.050589    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:08.051789 kubelet[1745]: E0212 19:17:08.051765    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:08.053714 env[1216]: time="2024-02-12T19:17:08.053670918Z" level=info msg="CreateContainer within sandbox \"bed2d490960658a58e8002ecc0f2b83f9f4c659cdccefadfbb404b9928e7166b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:17:08.054130 env[1216]: time="2024-02-12T19:17:08.054096895Z" level=info msg="CreateContainer within sandbox \"9dddf1bba81637390769195175f717ac82cdf9e749ea72cb98865ae3b156f093\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:17:08.054727 env[1216]: time="2024-02-12T19:17:08.054690315Z" level=info msg="CreateContainer within sandbox \"843e82d89f93eac1cc190744f97d0cec41df730c2957218b42ae774aa54ca5f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:17:08.072051 env[1216]: time="2024-02-12T19:17:08.072002984Z" level=info msg="CreateContainer within sandbox \"843e82d89f93eac1cc190744f97d0cec41df730c2957218b42ae774aa54ca5f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"625de9c3f72dc19a2e95f57104ac6b8af227baa92b886f18adb73a4eb630ce6b\""
Feb 12 19:17:08.073068 env[1216]: time="2024-02-12T19:17:08.073024698Z" level=info msg="StartContainer for \"625de9c3f72dc19a2e95f57104ac6b8af227baa92b886f18adb73a4eb630ce6b\""
Feb 12 19:17:08.074175 env[1216]: time="2024-02-12T19:17:08.074149881Z" level=info msg="CreateContainer within sandbox \"bed2d490960658a58e8002ecc0f2b83f9f4c659cdccefadfbb404b9928e7166b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31e52ca483090443fca8a579af394a593529d8648986e9423ef4964a09330b56\""
Feb 12 19:17:08.074739 env[1216]: time="2024-02-12T19:17:08.074717664Z" level=info msg="StartContainer for \"31e52ca483090443fca8a579af394a593529d8648986e9423ef4964a09330b56\""
Feb 12 19:17:08.075443 env[1216]: time="2024-02-12T19:17:08.075410880Z" level=info msg="CreateContainer within sandbox \"9dddf1bba81637390769195175f717ac82cdf9e749ea72cb98865ae3b156f093\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"621c2fdd2b583cf2cbbb582e40f3a462d83b657586cdd9d8f76eb627e456f6ff\""
Feb 12 19:17:08.075801 env[1216]: time="2024-02-12T19:17:08.075778154Z" level=info msg="StartContainer for \"621c2fdd2b583cf2cbbb582e40f3a462d83b657586cdd9d8f76eb627e456f6ff\""
Feb 12 19:17:08.171048 env[1216]: time="2024-02-12T19:17:08.168348662Z" level=info msg="StartContainer for \"31e52ca483090443fca8a579af394a593529d8648986e9423ef4964a09330b56\" returns successfully"
Feb 12 19:17:08.171048 env[1216]: time="2024-02-12T19:17:08.168770965Z" level=info msg="StartContainer for \"621c2fdd2b583cf2cbbb582e40f3a462d83b657586cdd9d8f76eb627e456f6ff\" returns successfully"
Feb 12 19:17:08.216833 kubelet[1745]: W0212 19:17:08.216774    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:08.217214 kubelet[1745]: E0212 19:17:08.217199    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:08.217543 env[1216]: time="2024-02-12T19:17:08.217503420Z" level=info msg="StartContainer for \"625de9c3f72dc19a2e95f57104ac6b8af227baa92b886f18adb73a4eb630ce6b\" returns successfully"
Feb 12 19:17:08.235940 kubelet[1745]: E0212 19:17:08.233639    1745 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:08.339639 kubelet[1745]: I0212 19:17:08.339286    1745 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:08.339639 kubelet[1745]: E0212 19:17:08.339612    1745 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Feb 12 19:17:08.402532 kubelet[1745]: W0212 19:17:08.402464    1745 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:08.402532 kubelet[1745]: E0212 19:17:08.402529    1745 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Feb 12 19:17:08.899688 kubelet[1745]: E0212 19:17:08.899658    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:08.902145 kubelet[1745]: E0212 19:17:08.902125    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:08.909465 kubelet[1745]: E0212 19:17:08.909432    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:09.906056 kubelet[1745]: E0212 19:17:09.906018    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:09.906613 kubelet[1745]: E0212 19:17:09.906572    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:09.906727 kubelet[1745]: E0212 19:17:09.906711    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:09.941195 kubelet[1745]: I0212 19:17:09.941171    1745 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:10.530542 kubelet[1745]: E0212 19:17:10.530511    1745 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 12 19:17:10.551850 kubelet[1745]: I0212 19:17:10.551798    1745 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 19:17:10.828240 kubelet[1745]: I0212 19:17:10.828142    1745 apiserver.go:52] "Watching apiserver"
Feb 12 19:17:10.831152 kubelet[1745]: I0212 19:17:10.831122    1745 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:17:10.858991 kubelet[1745]: I0212 19:17:10.858951    1745 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:17:11.028074 kubelet[1745]: E0212 19:17:11.028042    1745 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Feb 12 19:17:11.028719 kubelet[1745]: E0212 19:17:11.028703    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:11.226834 kubelet[1745]: E0212 19:17:11.226456    1745 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:11.226990 kubelet[1745]: E0212 19:17:11.226893    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:11.426091 kubelet[1745]: E0212 19:17:11.426057    1745 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:11.426548 kubelet[1745]: E0212 19:17:11.426528    1745 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:13.175105 systemd[1]: Reloading.
Feb 12 19:17:13.221639 /usr/lib/systemd/system-generators/torcx-generator[2079]: time="2024-02-12T19:17:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:17:13.221669 /usr/lib/systemd/system-generators/torcx-generator[2079]: time="2024-02-12T19:17:13Z" level=info msg="torcx already run"
Feb 12 19:17:13.283613 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:17:13.283633 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:17:13.300627 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:17:13.366774 systemd[1]: Stopping kubelet.service...
Feb 12 19:17:13.387972 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 19:17:13.388334 systemd[1]: Stopped kubelet.service.
Feb 12 19:17:13.390157 systemd[1]: Started kubelet.service.
Feb 12 19:17:13.457713 kubelet[2123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:17:13.457713 kubelet[2123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:13.458174 kubelet[2123]: I0212 19:17:13.457680    2123 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:17:13.459359 kubelet[2123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:17:13.459359 kubelet[2123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:13.462147 kubelet[2123]: I0212 19:17:13.462119    2123 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 19:17:13.462147 kubelet[2123]: I0212 19:17:13.462141    2123 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:17:13.462352 kubelet[2123]: I0212 19:17:13.462323    2123 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 19:17:13.463480 kubelet[2123]: I0212 19:17:13.463459    2123 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 19:17:13.465742 kubelet[2123]: W0212 19:17:13.465717    2123 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:17:13.465887 kubelet[2123]: I0212 19:17:13.465867    2123 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:17:13.466380 kubelet[2123]: I0212 19:17:13.466360    2123 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 12 19:17:13.466760 kubelet[2123]: I0212 19:17:13.466749    2123 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:17:13.466820 kubelet[2123]: I0212 19:17:13.466809    2123 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:17:13.466898 kubelet[2123]: I0212 19:17:13.466828    2123 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:17:13.466898 kubelet[2123]: I0212 19:17:13.466838    2123 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 19:17:13.466898 kubelet[2123]: I0212 19:17:13.466859    2123 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:13.472408 kubelet[2123]: I0212 19:17:13.472386    2123 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 19:17:13.472408 kubelet[2123]: I0212 19:17:13.472411    2123 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:17:13.472513 kubelet[2123]: I0212 19:17:13.472434    2123 kubelet.go:297] "Adding apiserver pod source"
Feb 12 19:17:13.472513 kubelet[2123]: I0212 19:17:13.472445    2123 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:17:13.474945 kubelet[2123]: I0212 19:17:13.474909    2123 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:17:13.475563 kubelet[2123]: I0212 19:17:13.475547    2123 server.go:1186] "Started kubelet"
Feb 12 19:17:13.477296 kubelet[2123]: I0212 19:17:13.477272    2123 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:17:13.477896 kubelet[2123]: E0212 19:17:13.477703    2123 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:17:13.477896 kubelet[2123]: E0212 19:17:13.477776    2123 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:17:13.478034 kubelet[2123]: I0212 19:17:13.477973    2123 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:17:13.478272 kubelet[2123]: I0212 19:17:13.478258    2123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:17:13.478446 kubelet[2123]: I0212 19:17:13.478433    2123 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:17:13.478857 kubelet[2123]: I0212 19:17:13.478836    2123 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:17:13.479193 kubelet[2123]: E0212 19:17:13.479117    2123 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 19:17:13.522235 kubelet[2123]: I0212 19:17:13.522198    2123 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:17:13.536191 kubelet[2123]: I0212 19:17:13.536171    2123 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:17:13.536355 kubelet[2123]: I0212 19:17:13.536344    2123 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:17:13.536527 kubelet[2123]: I0212 19:17:13.536514    2123 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:17:13.536692 kubelet[2123]: E0212 19:17:13.536680    2123 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:17:13.585670 kubelet[2123]: I0212 19:17:13.585643    2123 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:17:13.586359 kubelet[2123]: I0212 19:17:13.586340    2123 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:17:13.586974 kubelet[2123]: I0212 19:17:13.586672    2123 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:13.587466 kubelet[2123]: I0212 19:17:13.587452    2123 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 19:17:13.587604 kubelet[2123]: I0212 19:17:13.587586    2123 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 19:17:13.587687 kubelet[2123]: I0212 19:17:13.587677    2123 policy_none.go:49] "None policy: Start"
Feb 12 19:17:13.588948 kubelet[2123]: I0212 19:17:13.588929    2123 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:17:13.589009 kubelet[2123]: I0212 19:17:13.588956    2123 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:17:13.589133 kubelet[2123]: I0212 19:17:13.589092    2123 state_mem.go:75] "Updated machine memory state"
Feb 12 19:17:13.590757 kubelet[2123]: I0212 19:17:13.590739    2123 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:13.592374 kubelet[2123]: I0212 19:17:13.592242    2123 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:17:13.592677 kubelet[2123]: I0212 19:17:13.592444    2123 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:17:13.597027 kubelet[2123]: I0212 19:17:13.596987    2123 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb 12 19:17:13.597112 kubelet[2123]: I0212 19:17:13.597049    2123 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 19:17:13.637304 kubelet[2123]: I0212 19:17:13.637250    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:13.637482 kubelet[2123]: I0212 19:17:13.637369    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:13.637482 kubelet[2123]: I0212 19:17:13.637425    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:13.684993 kubelet[2123]: I0212 19:17:13.684941    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:17:13.684993 kubelet[2123]: I0212 19:17:13.684991    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18be35aab02d8fe0bebd95f4ebe2d6bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18be35aab02d8fe0bebd95f4ebe2d6bb\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:13.685157 kubelet[2123]: I0212 19:17:13.685018    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:13.685157 kubelet[2123]: I0212 19:17:13.685039    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18be35aab02d8fe0bebd95f4ebe2d6bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18be35aab02d8fe0bebd95f4ebe2d6bb\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:13.685157 kubelet[2123]: I0212 19:17:13.685077    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18be35aab02d8fe0bebd95f4ebe2d6bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18be35aab02d8fe0bebd95f4ebe2d6bb\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:13.685157 kubelet[2123]: I0212 19:17:13.685098    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:13.685157 kubelet[2123]: I0212 19:17:13.685118    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:13.685295 kubelet[2123]: I0212 19:17:13.685138    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:13.685295 kubelet[2123]: I0212 19:17:13.685161    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:13.879008 kubelet[2123]: E0212 19:17:13.878885    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:13.945177 kubelet[2123]: E0212 19:17:13.945141    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:13.945308 kubelet[2123]: E0212 19:17:13.945235    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:14.475481 kubelet[2123]: I0212 19:17:14.475442    2123 apiserver.go:52] "Watching apiserver"
Feb 12 19:17:14.479078 kubelet[2123]: I0212 19:17:14.479053    2123 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:17:14.490262 kubelet[2123]: I0212 19:17:14.490221    2123 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:17:14.544191 kubelet[2123]: E0212 19:17:14.544164    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:14.715749 sudo[1338]: pam_unix(sudo:session): session closed for user root
Feb 12 19:17:14.717496 sshd[1332]: pam_unix(sshd:session): session closed for user core
Feb 12 19:17:14.720010 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:53432.service: Deactivated successfully.
Feb 12 19:17:14.721118 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:17:14.721121 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:17:14.722029 systemd-logind[1208]: Removed session 5.
Feb 12 19:17:14.877878 kubelet[2123]: E0212 19:17:14.877745    2123 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 12 19:17:14.878089 kubelet[2123]: E0212 19:17:14.878071    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:15.077842 kubelet[2123]: E0212 19:17:15.077798    2123 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:15.078254 kubelet[2123]: E0212 19:17:15.078230    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:15.286694 kubelet[2123]: I0212 19:17:15.286577    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.286520751 pod.CreationTimestamp="2024-02-12 19:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:15.286426252 +0000 UTC m=+1.892007228" watchObservedRunningTime="2024-02-12 19:17:15.286520751 +0000 UTC m=+1.892101687"
Feb 12 19:17:15.545624 kubelet[2123]: E0212 19:17:15.545503    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:15.546072 kubelet[2123]: E0212 19:17:15.546047    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:15.546406 kubelet[2123]: E0212 19:17:15.546379    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:16.085900 kubelet[2123]: I0212 19:17:16.085863    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.08582594 pod.CreationTimestamp="2024-02-12 19:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:15.679067485 +0000 UTC m=+2.284648421" watchObservedRunningTime="2024-02-12 19:17:16.08582594 +0000 UTC m=+2.691406916"
Feb 12 19:17:16.086420 kubelet[2123]: I0212 19:17:16.086395    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.086366274 pod.CreationTimestamp="2024-02-12 19:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:16.082425226 +0000 UTC m=+2.688006202" watchObservedRunningTime="2024-02-12 19:17:16.086366274 +0000 UTC m=+2.691947250"
Feb 12 19:17:17.705948 kubelet[2123]: E0212 19:17:17.705916    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:17.930780 kubelet[2123]: E0212 19:17:17.930752    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:19.791558 kubelet[2123]: E0212 19:17:19.791512    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:20.553020 kubelet[2123]: E0212 19:17:20.552970    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.595998 kubelet[2123]: I0212 19:17:27.595954    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:27.610365 kubelet[2123]: I0212 19:17:27.610333    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:27.693011 kubelet[2123]: I0212 19:17:27.692965    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f-xtables-lock\") pod \"kube-proxy-5s2kj\" (UID: \"ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f\") " pod="kube-system/kube-proxy-5s2kj"
Feb 12 19:17:27.693011 kubelet[2123]: I0212 19:17:27.693017    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrqnc\" (UniqueName: \"kubernetes.io/projected/ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f-kube-api-access-zrqnc\") pod \"kube-proxy-5s2kj\" (UID: \"ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f\") " pod="kube-system/kube-proxy-5s2kj"
Feb 12 19:17:27.693164 kubelet[2123]: I0212 19:17:27.693040    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9a45d29f-23eb-415a-8f02-5bdaced3f182-cni-plugin\") pod \"kube-flannel-ds-jwzs8\" (UID: \"9a45d29f-23eb-415a-8f02-5bdaced3f182\") " pod="kube-flannel/kube-flannel-ds-jwzs8"
Feb 12 19:17:27.693164 kubelet[2123]: I0212 19:17:27.693062    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f-lib-modules\") pod \"kube-proxy-5s2kj\" (UID: \"ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f\") " pod="kube-system/kube-proxy-5s2kj"
Feb 12 19:17:27.693164 kubelet[2123]: I0212 19:17:27.693083    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9a45d29f-23eb-415a-8f02-5bdaced3f182-flannel-cfg\") pod \"kube-flannel-ds-jwzs8\" (UID: \"9a45d29f-23eb-415a-8f02-5bdaced3f182\") " pod="kube-flannel/kube-flannel-ds-jwzs8"
Feb 12 19:17:27.693164 kubelet[2123]: I0212 19:17:27.693104    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glwqz\" (UniqueName: \"kubernetes.io/projected/9a45d29f-23eb-415a-8f02-5bdaced3f182-kube-api-access-glwqz\") pod \"kube-flannel-ds-jwzs8\" (UID: \"9a45d29f-23eb-415a-8f02-5bdaced3f182\") " pod="kube-flannel/kube-flannel-ds-jwzs8"
Feb 12 19:17:27.693164 kubelet[2123]: I0212 19:17:27.693126    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f-kube-proxy\") pod \"kube-proxy-5s2kj\" (UID: \"ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f\") " pod="kube-system/kube-proxy-5s2kj"
Feb 12 19:17:27.693302 kubelet[2123]: I0212 19:17:27.693145    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9a45d29f-23eb-415a-8f02-5bdaced3f182-run\") pod \"kube-flannel-ds-jwzs8\" (UID: \"9a45d29f-23eb-415a-8f02-5bdaced3f182\") " pod="kube-flannel/kube-flannel-ds-jwzs8"
Feb 12 19:17:27.693302 kubelet[2123]: I0212 19:17:27.693167    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a45d29f-23eb-415a-8f02-5bdaced3f182-xtables-lock\") pod \"kube-flannel-ds-jwzs8\" (UID: \"9a45d29f-23eb-415a-8f02-5bdaced3f182\") " pod="kube-flannel/kube-flannel-ds-jwzs8"
Feb 12 19:17:27.693302 kubelet[2123]: I0212 19:17:27.693188    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9a45d29f-23eb-415a-8f02-5bdaced3f182-cni\") pod \"kube-flannel-ds-jwzs8\" (UID: \"9a45d29f-23eb-415a-8f02-5bdaced3f182\") " pod="kube-flannel/kube-flannel-ds-jwzs8"
Feb 12 19:17:27.698958 kubelet[2123]: I0212 19:17:27.698922    2123 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 19:17:27.699279 env[1216]: time="2024-02-12T19:17:27.699235467Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:17:27.699555 kubelet[2123]: I0212 19:17:27.699374    2123 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 19:17:27.714136 kubelet[2123]: E0212 19:17:27.714105    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.938293 kubelet[2123]: E0212 19:17:27.938261    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:28.200483 kubelet[2123]: E0212 19:17:28.200368    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:28.201048 env[1216]: time="2024-02-12T19:17:28.201009753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5s2kj,Uid:ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:28.214432 kubelet[2123]: E0212 19:17:28.214408    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:28.216888 env[1216]: time="2024-02-12T19:17:28.215079591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jwzs8,Uid:9a45d29f-23eb-415a-8f02-5bdaced3f182,Namespace:kube-flannel,Attempt:0,}"
Feb 12 19:17:28.218875 env[1216]: time="2024-02-12T19:17:28.218816104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:28.218875 env[1216]: time="2024-02-12T19:17:28.218861507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:28.218875 env[1216]: time="2024-02-12T19:17:28.218872427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:28.219047 env[1216]: time="2024-02-12T19:17:28.219003515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c489d48aee89345bccccf6515f3cbc43737707ba7f4c09819fcee845c1a7ca3 pid=2219 runtime=io.containerd.runc.v2
Feb 12 19:17:28.231768 env[1216]: time="2024-02-12T19:17:28.231692267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:28.231888 env[1216]: time="2024-02-12T19:17:28.231780992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:28.231888 env[1216]: time="2024-02-12T19:17:28.231807634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:28.232067 env[1216]: time="2024-02-12T19:17:28.232033928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06 pid=2246 runtime=io.containerd.runc.v2
Feb 12 19:17:28.263242 env[1216]: time="2024-02-12T19:17:28.263199551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5s2kj,Uid:ce1e0e07-d66a-4b6d-88ba-b7fe8a34938f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c489d48aee89345bccccf6515f3cbc43737707ba7f4c09819fcee845c1a7ca3\""
Feb 12 19:17:28.263823 kubelet[2123]: E0212 19:17:28.263804    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:28.266159 env[1216]: time="2024-02-12T19:17:28.266119773Z" level=info msg="CreateContainer within sandbox \"4c489d48aee89345bccccf6515f3cbc43737707ba7f4c09819fcee845c1a7ca3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 19:17:28.282102 env[1216]: time="2024-02-12T19:17:28.282045766Z" level=info msg="CreateContainer within sandbox \"4c489d48aee89345bccccf6515f3cbc43737707ba7f4c09819fcee845c1a7ca3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c43983c26ce2d7f24ec98eeb2dd11a19a2da3bd5bf540644a7d3b3b34f55f075\""
Feb 12 19:17:28.283940 env[1216]: time="2024-02-12T19:17:28.283896922Z" level=info msg="StartContainer for \"c43983c26ce2d7f24ec98eeb2dd11a19a2da3bd5bf540644a7d3b3b34f55f075\""
Feb 12 19:17:28.287117 env[1216]: time="2024-02-12T19:17:28.287085121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jwzs8,Uid:9a45d29f-23eb-415a-8f02-5bdaced3f182,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\""
Feb 12 19:17:28.287773 kubelet[2123]: E0212 19:17:28.287755    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:28.289076 env[1216]: time="2024-02-12T19:17:28.289033362Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\""
Feb 12 19:17:28.365696 env[1216]: time="2024-02-12T19:17:28.365638699Z" level=info msg="StartContainer for \"c43983c26ce2d7f24ec98eeb2dd11a19a2da3bd5bf540644a7d3b3b34f55f075\" returns successfully"
Feb 12 19:17:28.565518 kubelet[2123]: E0212 19:17:28.565408    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:28.567096 kubelet[2123]: E0212 19:17:28.567036    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:29.364935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3540737603.mount: Deactivated successfully.
Feb 12 19:17:29.409028 env[1216]: time="2024-02-12T19:17:29.408978127Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:29.411087 env[1216]: time="2024-02-12T19:17:29.411053770Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:29.412586 env[1216]: time="2024-02-12T19:17:29.412551339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:29.414120 env[1216]: time="2024-02-12T19:17:29.414078869Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:29.414798 env[1216]: time="2024-02-12T19:17:29.414768750Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9\""
Feb 12 19:17:29.417831 env[1216]: time="2024-02-12T19:17:29.417763767Z" level=info msg="CreateContainer within sandbox \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 12 19:17:29.426962 env[1216]: time="2024-02-12T19:17:29.426908109Z" level=info msg="CreateContainer within sandbox \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"575f9527fb30dc7a148125bf973d36e2e680150402e61eb9205b4fd3609cff14\""
Feb 12 19:17:29.427466 env[1216]: time="2024-02-12T19:17:29.427430060Z" level=info msg="StartContainer for \"575f9527fb30dc7a148125bf973d36e2e680150402e61eb9205b4fd3609cff14\""
Feb 12 19:17:29.496647 env[1216]: time="2024-02-12T19:17:29.493971403Z" level=info msg="StartContainer for \"575f9527fb30dc7a148125bf973d36e2e680150402e61eb9205b4fd3609cff14\" returns successfully"
Feb 12 19:17:29.569628 kubelet[2123]: E0212 19:17:29.569546    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:29.594957 kubelet[2123]: E0212 19:17:29.569658    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:29.602717 kubelet[2123]: I0212 19:17:29.602676    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5s2kj" podStartSLOduration=2.602637721 pod.CreationTimestamp="2024-02-12 19:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:29.602534115 +0000 UTC m=+16.208115091" watchObservedRunningTime="2024-02-12 19:17:29.602637721 +0000 UTC m=+16.208218657"
Feb 12 19:17:29.793272 env[1216]: time="2024-02-12T19:17:29.793217492Z" level=info msg="shim disconnected" id=575f9527fb30dc7a148125bf973d36e2e680150402e61eb9205b4fd3609cff14
Feb 12 19:17:29.793272 env[1216]: time="2024-02-12T19:17:29.793266855Z" level=warning msg="cleaning up after shim disconnected" id=575f9527fb30dc7a148125bf973d36e2e680150402e61eb9205b4fd3609cff14 namespace=k8s.io
Feb 12 19:17:29.793272 env[1216]: time="2024-02-12T19:17:29.793276336Z" level=info msg="cleaning up dead shim"
Feb 12 19:17:29.801127 env[1216]: time="2024-02-12T19:17:29.801081598Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2476 runtime=io.containerd.runc.v2\n"
Feb 12 19:17:29.979654 update_engine[1209]: I0212 19:17:29.979483  1209 update_attempter.cc:509] Updating boot flags...
Feb 12 19:17:30.571840 kubelet[2123]: E0212 19:17:30.571780    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:30.574503 env[1216]: time="2024-02-12T19:17:30.573824191Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\""
Feb 12 19:17:31.621148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408963249.mount: Deactivated successfully.
Feb 12 19:17:32.242999 env[1216]: time="2024-02-12T19:17:32.242944448Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:32.244575 env[1216]: time="2024-02-12T19:17:32.244541489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:32.246777 env[1216]: time="2024-02-12T19:17:32.246733961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:32.248401 env[1216]: time="2024-02-12T19:17:32.248371685Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:32.249400 env[1216]: time="2024-02-12T19:17:32.249365335Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459\""
Feb 12 19:17:32.252456 env[1216]: time="2024-02-12T19:17:32.252422531Z" level=info msg="CreateContainer within sandbox \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 12 19:17:32.264259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710020562.mount: Deactivated successfully.
Feb 12 19:17:32.269491 env[1216]: time="2024-02-12T19:17:32.269433960Z" level=info msg="CreateContainer within sandbox \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"48f4812f42487880628796d36b3db2434fd27e123dbd759ecde7691ffbef583a\""
Feb 12 19:17:32.270265 env[1216]: time="2024-02-12T19:17:32.270235041Z" level=info msg="StartContainer for \"48f4812f42487880628796d36b3db2434fd27e123dbd759ecde7691ffbef583a\""
Feb 12 19:17:32.321087 env[1216]: time="2024-02-12T19:17:32.321030874Z" level=info msg="StartContainer for \"48f4812f42487880628796d36b3db2434fd27e123dbd759ecde7691ffbef583a\" returns successfully"
Feb 12 19:17:32.402228 kubelet[2123]: I0212 19:17:32.402199    2123 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 19:17:32.421261 env[1216]: time="2024-02-12T19:17:32.421201427Z" level=info msg="shim disconnected" id=48f4812f42487880628796d36b3db2434fd27e123dbd759ecde7691ffbef583a
Feb 12 19:17:32.421261 env[1216]: time="2024-02-12T19:17:32.421245829Z" level=warning msg="cleaning up after shim disconnected" id=48f4812f42487880628796d36b3db2434fd27e123dbd759ecde7691ffbef583a namespace=k8s.io
Feb 12 19:17:32.421261 env[1216]: time="2024-02-12T19:17:32.421255549Z" level=info msg="cleaning up dead shim"
Feb 12 19:17:32.429531 kubelet[2123]: I0212 19:17:32.429403    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:32.433919 kubelet[2123]: I0212 19:17:32.431046    2123 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:17:32.439816 env[1216]: time="2024-02-12T19:17:32.439078979Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2544 runtime=io.containerd.runc.v2\n"
Feb 12 19:17:32.531643 kubelet[2123]: I0212 19:17:32.531022    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjtv\" (UniqueName: \"kubernetes.io/projected/649ca98b-b832-4aee-a744-c2260946320f-kube-api-access-rzjtv\") pod \"coredns-787d4945fb-5xz8h\" (UID: \"649ca98b-b832-4aee-a744-c2260946320f\") " pod="kube-system/coredns-787d4945fb-5xz8h"
Feb 12 19:17:32.531643 kubelet[2123]: I0212 19:17:32.531069    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/348e0f89-bd30-493c-b298-a326041a8739-config-volume\") pod \"coredns-787d4945fb-6pz4n\" (UID: \"348e0f89-bd30-493c-b298-a326041a8739\") " pod="kube-system/coredns-787d4945fb-6pz4n"
Feb 12 19:17:32.531643 kubelet[2123]: I0212 19:17:32.531094    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqs99\" (UniqueName: \"kubernetes.io/projected/348e0f89-bd30-493c-b298-a326041a8739-kube-api-access-zqs99\") pod \"coredns-787d4945fb-6pz4n\" (UID: \"348e0f89-bd30-493c-b298-a326041a8739\") " pod="kube-system/coredns-787d4945fb-6pz4n"
Feb 12 19:17:32.531643 kubelet[2123]: I0212 19:17:32.531116    2123 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/649ca98b-b832-4aee-a744-c2260946320f-config-volume\") pod \"coredns-787d4945fb-5xz8h\" (UID: \"649ca98b-b832-4aee-a744-c2260946320f\") " pod="kube-system/coredns-787d4945fb-5xz8h"
Feb 12 19:17:32.576358 kubelet[2123]: E0212 19:17:32.576329    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:32.583605 env[1216]: time="2024-02-12T19:17:32.578945598Z" level=info msg="CreateContainer within sandbox \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 12 19:17:32.600188 env[1216]: time="2024-02-12T19:17:32.600144881Z" level=info msg="CreateContainer within sandbox \"245f80fc19f35c3b26789589115c046500ee406bbd5841b193f60b9c90cd7b06\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"016aecbec7b1075b4d41a2cf2772b1436996dce891a0956fa965a90a73e1f753\""
Feb 12 19:17:32.600586 env[1216]: time="2024-02-12T19:17:32.600556822Z" level=info msg="StartContainer for \"016aecbec7b1075b4d41a2cf2772b1436996dce891a0956fa965a90a73e1f753\""
Feb 12 19:17:32.657686 env[1216]: time="2024-02-12T19:17:32.657624254Z" level=info msg="StartContainer for \"016aecbec7b1075b4d41a2cf2772b1436996dce891a0956fa965a90a73e1f753\" returns successfully"
Feb 12 19:17:32.735693 kubelet[2123]: E0212 19:17:32.735661    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:32.736173 kubelet[2123]: E0212 19:17:32.736150    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:32.737714 env[1216]: time="2024-02-12T19:17:32.737053549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5xz8h,Uid:649ca98b-b832-4aee-a744-c2260946320f,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:32.737714 env[1216]: time="2024-02-12T19:17:32.737459170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6pz4n,Uid:348e0f89-bd30-493c-b298-a326041a8739,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:32.814867 env[1216]: time="2024-02-12T19:17:32.813237358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6pz4n,Uid:348e0f89-bd30-493c-b298-a326041a8739,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9693dc631ee8dad926636ad4cb5ae32456df4cf6c39edc79f33eb4fc20eb315\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory"
Feb 12 19:17:32.814867 env[1216]: time="2024-02-12T19:17:32.813952914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5xz8h,Uid:649ca98b-b832-4aee-a744-c2260946320f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6f4c6974099b907b2331cb7997e90b13c320a085fe1851bb31d53e84f3400dc\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory"
Feb 12 19:17:32.815031 kubelet[2123]: E0212 19:17:32.814731    2123 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f4c6974099b907b2331cb7997e90b13c320a085fe1851bb31d53e84f3400dc\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory"
Feb 12 19:17:32.815031 kubelet[2123]: E0212 19:17:32.814770    2123 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9693dc631ee8dad926636ad4cb5ae32456df4cf6c39edc79f33eb4fc20eb315\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory"
Feb 12 19:17:32.815031 kubelet[2123]: E0212 19:17:32.814805    2123 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f4c6974099b907b2331cb7997e90b13c320a085fe1851bb31d53e84f3400dc\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5xz8h"
Feb 12 19:17:32.815031 kubelet[2123]: E0212 19:17:32.814824    2123 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9693dc631ee8dad926636ad4cb5ae32456df4cf6c39edc79f33eb4fc20eb315\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-6pz4n"
Feb 12 19:17:32.815031 kubelet[2123]: E0212 19:17:32.814850    2123 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9693dc631ee8dad926636ad4cb5ae32456df4cf6c39edc79f33eb4fc20eb315\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-6pz4n"
Feb 12 19:17:32.815161 kubelet[2123]: E0212 19:17:32.814913    2123 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-6pz4n_kube-system(348e0f89-bd30-493c-b298-a326041a8739)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-6pz4n_kube-system(348e0f89-bd30-493c-b298-a326041a8739)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9693dc631ee8dad926636ad4cb5ae32456df4cf6c39edc79f33eb4fc20eb315\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-6pz4n" podUID=348e0f89-bd30-493c-b298-a326041a8739
Feb 12 19:17:32.815161 kubelet[2123]: E0212 19:17:32.814827    2123 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f4c6974099b907b2331cb7997e90b13c320a085fe1851bb31d53e84f3400dc\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-5xz8h"
Feb 12 19:17:32.815234 kubelet[2123]: E0212 19:17:32.815162    2123 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-5xz8h_kube-system(649ca98b-b832-4aee-a744-c2260946320f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-5xz8h_kube-system(649ca98b-b832-4aee-a744-c2260946320f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6f4c6974099b907b2331cb7997e90b13c320a085fe1851bb31d53e84f3400dc\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-5xz8h" podUID=649ca98b-b832-4aee-a744-c2260946320f
Feb 12 19:17:33.579623 kubelet[2123]: E0212 19:17:33.579576    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:33.588485 kubelet[2123]: I0212 19:17:33.588452    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jwzs8" podStartSLOduration=-9.22337203026636e+09 pod.CreationTimestamp="2024-02-12 19:17:27 +0000 UTC" firstStartedPulling="2024-02-12 19:17:28.288539171 +0000 UTC m=+14.894120107" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:33.588001334 +0000 UTC m=+20.193582310" watchObservedRunningTime="2024-02-12 19:17:33.588416115 +0000 UTC m=+20.193997091"
Feb 12 19:17:34.114518 systemd-networkd[1104]: flannel.1: Link UP
Feb 12 19:17:34.114525 systemd-networkd[1104]: flannel.1: Gained carrier
Feb 12 19:17:34.580879 kubelet[2123]: E0212 19:17:34.580801    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:35.294752 systemd-networkd[1104]: flannel.1: Gained IPv6LL
Feb 12 19:17:44.537825 kubelet[2123]: E0212 19:17:44.537794    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:44.538240 kubelet[2123]: E0212 19:17:44.537883    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:44.538439 env[1216]: time="2024-02-12T19:17:44.538375916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5xz8h,Uid:649ca98b-b832-4aee-a744-c2260946320f,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:44.538917 env[1216]: time="2024-02-12T19:17:44.538874091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6pz4n,Uid:348e0f89-bd30-493c-b298-a326041a8739,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:44.573367 systemd-networkd[1104]: cni0: Link UP
Feb 12 19:17:44.573373 systemd-networkd[1104]: cni0: Gained carrier
Feb 12 19:17:44.574977 systemd-networkd[1104]: cni0: Lost carrier
Feb 12 19:17:44.580210 systemd-networkd[1104]: vethe311e78d: Link UP
Feb 12 19:17:44.585673 kernel: cni0: port 1(vethe311e78d) entered blocking state
Feb 12 19:17:44.585768 kernel: cni0: port 1(vethe311e78d) entered disabled state
Feb 12 19:17:44.587632 kernel: device vethe311e78d entered promiscuous mode
Feb 12 19:17:44.588943 kernel: cni0: port 1(vethe311e78d) entered blocking state
Feb 12 19:17:44.589011 kernel: cni0: port 1(vethe311e78d) entered forwarding state
Feb 12 19:17:44.592671 kernel: cni0: port 1(vethe311e78d) entered disabled state
Feb 12 19:17:44.592466 systemd-networkd[1104]: veth33706be1: Link UP
Feb 12 19:17:44.594740 kernel: cni0: port 2(veth33706be1) entered blocking state
Feb 12 19:17:44.594820 kernel: cni0: port 2(veth33706be1) entered disabled state
Feb 12 19:17:44.594849 kernel: device veth33706be1 entered promiscuous mode
Feb 12 19:17:44.595647 kernel: cni0: port 2(veth33706be1) entered blocking state
Feb 12 19:17:44.595693 kernel: cni0: port 2(veth33706be1) entered forwarding state
Feb 12 19:17:44.596810 kernel: cni0: port 2(veth33706be1) entered disabled state
Feb 12 19:17:44.605673 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe311e78d: link becomes ready
Feb 12 19:17:44.605779 kernel: cni0: port 1(vethe311e78d) entered blocking state
Feb 12 19:17:44.605799 kernel: cni0: port 1(vethe311e78d) entered forwarding state
Feb 12 19:17:44.605746 systemd-networkd[1104]: vethe311e78d: Gained carrier
Feb 12 19:17:44.605964 systemd-networkd[1104]: cni0: Gained carrier
Feb 12 19:17:44.607813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth33706be1: link becomes ready
Feb 12 19:17:44.607895 kernel: cni0: port 2(veth33706be1) entered blocking state
Feb 12 19:17:44.607925 kernel: cni0: port 2(veth33706be1) entered forwarding state
Feb 12 19:17:44.607919 systemd-networkd[1104]: veth33706be1: Gained carrier
Feb 12 19:17:44.610168 env[1216]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000ac8e8), "name":"cbr0", "type":"bridge"}
Feb 12 19:17:44.610843 env[1216]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Feb 12 19:17:44.610843 env[1216]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000ac8e8), "name":"cbr0", "type":"bridge"}
Feb 12 19:17:44.621049 env[1216]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T19:17:44.620977742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:44.621321 env[1216]: time="2024-02-12T19:17:44.621025104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:44.621321 env[1216]: time="2024-02-12T19:17:44.621232710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:44.621625 env[1216]: time="2024-02-12T19:17:44.621563600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c81bc11698b24f24dcffb6330bae12bc56dd0539656ff7a17f15a7c656046624 pid=2825 runtime=io.containerd.runc.v2
Feb 12 19:17:44.627209 env[1216]: time="2024-02-12T19:17:44.627140609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:44.627209 env[1216]: time="2024-02-12T19:17:44.627180731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:44.627209 env[1216]: time="2024-02-12T19:17:44.627192251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:44.627346 env[1216]: time="2024-02-12T19:17:44.627313175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52754989d97484241f2bbd4792b5454676684fabf334277e354bef013b8d3912 pid=2842 runtime=io.containerd.runc.v2
Feb 12 19:17:44.665617 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:17:44.671948 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:17:44.686164 env[1216]: time="2024-02-12T19:17:44.686111759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6pz4n,Uid:348e0f89-bd30-493c-b298-a326041a8739,Namespace:kube-system,Attempt:0,} returns sandbox id \"c81bc11698b24f24dcffb6330bae12bc56dd0539656ff7a17f15a7c656046624\""
Feb 12 19:17:44.686768 kubelet[2123]: E0212 19:17:44.686752    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:44.688782 env[1216]: time="2024-02-12T19:17:44.688743919Z" level=info msg="CreateContainer within sandbox \"c81bc11698b24f24dcffb6330bae12bc56dd0539656ff7a17f15a7c656046624\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:17:44.698342 env[1216]: time="2024-02-12T19:17:44.698301929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-5xz8h,Uid:649ca98b-b832-4aee-a744-c2260946320f,Namespace:kube-system,Attempt:0,} returns sandbox id \"52754989d97484241f2bbd4792b5454676684fabf334277e354bef013b8d3912\""
Feb 12 19:17:44.700665 kubelet[2123]: E0212 19:17:44.699227    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:44.701099 env[1216]: time="2024-02-12T19:17:44.701042892Z" level=info msg="CreateContainer within sandbox \"c81bc11698b24f24dcffb6330bae12bc56dd0539656ff7a17f15a7c656046624\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4662758383312875b1521fe7ff1df5fc879bdd889d08d3ff5d0b3535fc79084\""
Feb 12 19:17:44.701479 env[1216]: time="2024-02-12T19:17:44.701398943Z" level=info msg="StartContainer for \"a4662758383312875b1521fe7ff1df5fc879bdd889d08d3ff5d0b3535fc79084\""
Feb 12 19:17:44.702753 env[1216]: time="2024-02-12T19:17:44.702719983Z" level=info msg="CreateContainer within sandbox \"52754989d97484241f2bbd4792b5454676684fabf334277e354bef013b8d3912\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:17:44.715343 env[1216]: time="2024-02-12T19:17:44.715277404Z" level=info msg="CreateContainer within sandbox \"52754989d97484241f2bbd4792b5454676684fabf334277e354bef013b8d3912\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51c35062addcd40f77f6f55043e93d4ff2cf6a7bbf4325809fd4701d8f084c63\""
Feb 12 19:17:44.716180 env[1216]: time="2024-02-12T19:17:44.716115629Z" level=info msg="StartContainer for \"51c35062addcd40f77f6f55043e93d4ff2cf6a7bbf4325809fd4701d8f084c63\""
Feb 12 19:17:44.766765 env[1216]: time="2024-02-12T19:17:44.766712524Z" level=info msg="StartContainer for \"a4662758383312875b1521fe7ff1df5fc879bdd889d08d3ff5d0b3535fc79084\" returns successfully"
Feb 12 19:17:44.801954 env[1216]: time="2024-02-12T19:17:44.801843510Z" level=info msg="StartContainer for \"51c35062addcd40f77f6f55043e93d4ff2cf6a7bbf4325809fd4701d8f084c63\" returns successfully"
Feb 12 19:17:45.604373 kubelet[2123]: E0212 19:17:45.604338    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:45.607137 kubelet[2123]: E0212 19:17:45.607114    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:45.614973 kubelet[2123]: I0212 19:17:45.614935    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-5xz8h" podStartSLOduration=18.614903022 pod.CreationTimestamp="2024-02-12 19:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:45.614705496 +0000 UTC m=+32.220286472" watchObservedRunningTime="2024-02-12 19:17:45.614903022 +0000 UTC m=+32.220483998"
Feb 12 19:17:45.643539 kubelet[2123]: I0212 19:17:45.643485    2123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-6pz4n" podStartSLOduration=18.643448257 pod.CreationTimestamp="2024-02-12 19:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:45.634199106 +0000 UTC m=+32.239780082" watchObservedRunningTime="2024-02-12 19:17:45.643448257 +0000 UTC m=+32.249029233"
Feb 12 19:17:45.982708 systemd-networkd[1104]: veth33706be1: Gained IPv6LL
Feb 12 19:17:46.110704 systemd-networkd[1104]: cni0: Gained IPv6LL
Feb 12 19:17:46.558728 systemd-networkd[1104]: vethe311e78d: Gained IPv6LL
Feb 12 19:17:46.608871 kubelet[2123]: E0212 19:17:46.608845    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:46.609338 kubelet[2123]: E0212 19:17:46.609326    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:47.610807 kubelet[2123]: E0212 19:17:47.610764    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:47.611320 kubelet[2123]: E0212 19:17:47.611303    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:52.276019 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:34540.service.
Feb 12 19:17:52.328393 sshd[3069]: Accepted publickey for core from 10.0.0.1 port 34540 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:17:52.330132 sshd[3069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:17:52.334453 systemd-logind[1208]: New session 6 of user core.
Feb 12 19:17:52.334934 systemd[1]: Started session-6.scope.
Feb 12 19:17:52.480019 sshd[3069]: pam_unix(sshd:session): session closed for user core
Feb 12 19:17:52.483318 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:34540.service: Deactivated successfully.
Feb 12 19:17:52.484368 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:17:52.484796 systemd-logind[1208]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:17:52.485439 systemd-logind[1208]: Removed session 6.
Feb 12 19:17:57.482210 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:47950.service.
Feb 12 19:17:57.527308 sshd[3103]: Accepted publickey for core from 10.0.0.1 port 47950 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:17:57.528801 sshd[3103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:17:57.534231 systemd-logind[1208]: New session 7 of user core.
Feb 12 19:17:57.535152 systemd[1]: Started session-7.scope.
Feb 12 19:17:57.655130 sshd[3103]: pam_unix(sshd:session): session closed for user core
Feb 12 19:17:57.657800 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:47950.service: Deactivated successfully.
Feb 12 19:17:57.659058 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 19:17:57.659060 systemd-logind[1208]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:17:57.659898 systemd-logind[1208]: Removed session 7.
Feb 12 19:18:02.658583 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:51292.service.
Feb 12 19:18:02.704304 sshd[3138]: Accepted publickey for core from 10.0.0.1 port 51292 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:02.705467 sshd[3138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:02.709824 systemd-logind[1208]: New session 8 of user core.
Feb 12 19:18:02.709875 systemd[1]: Started session-8.scope.
Feb 12 19:18:02.823943 sshd[3138]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:02.826414 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:51306.service.
Feb 12 19:18:02.826934 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:51292.service: Deactivated successfully.
Feb 12 19:18:02.827998 systemd-logind[1208]: Session 8 logged out. Waiting for processes to exit.
Feb 12 19:18:02.828042 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 19:18:02.829053 systemd-logind[1208]: Removed session 8.
Feb 12 19:18:02.874293 sshd[3151]: Accepted publickey for core from 10.0.0.1 port 51306 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:02.875964 sshd[3151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:02.879357 systemd-logind[1208]: New session 9 of user core.
Feb 12 19:18:02.880264 systemd[1]: Started session-9.scope.
Feb 12 19:18:03.078901 sshd[3151]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:03.079530 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:51312.service.
Feb 12 19:18:03.084566 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:51306.service: Deactivated successfully.
Feb 12 19:18:03.085591 systemd-logind[1208]: Session 9 logged out. Waiting for processes to exit.
Feb 12 19:18:03.085671 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 19:18:03.086316 systemd-logind[1208]: Removed session 9.
Feb 12 19:18:03.132500 sshd[3164]: Accepted publickey for core from 10.0.0.1 port 51312 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:03.131866 sshd[3164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:03.140075 systemd[1]: Started session-10.scope.
Feb 12 19:18:03.140562 systemd-logind[1208]: New session 10 of user core.
Feb 12 19:18:03.263903 sshd[3164]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:03.266344 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:51312.service: Deactivated successfully.
Feb 12 19:18:03.269144 systemd-logind[1208]: Session 10 logged out. Waiting for processes to exit.
Feb 12 19:18:03.269192 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 19:18:03.269997 systemd-logind[1208]: Removed session 10.
Feb 12 19:18:08.266760 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:51324.service.
Feb 12 19:18:08.313797 sshd[3198]: Accepted publickey for core from 10.0.0.1 port 51324 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:08.311999 sshd[3198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:08.317466 systemd-logind[1208]: New session 11 of user core.
Feb 12 19:18:08.318357 systemd[1]: Started session-11.scope.
Feb 12 19:18:08.431288 sshd[3198]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:08.433758 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:51324.service: Deactivated successfully.
Feb 12 19:18:08.434898 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:18:08.435273 systemd-logind[1208]: Session 11 logged out. Waiting for processes to exit.
Feb 12 19:18:08.439135 systemd-logind[1208]: Removed session 11.
Feb 12 19:18:13.434705 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:37400.service.
Feb 12 19:18:13.482526 sshd[3230]: Accepted publickey for core from 10.0.0.1 port 37400 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:13.485098 sshd[3230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:13.489272 systemd-logind[1208]: New session 12 of user core.
Feb 12 19:18:13.489729 systemd[1]: Started session-12.scope.
Feb 12 19:18:13.609510 sshd[3230]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:13.612734 systemd-logind[1208]: Session 12 logged out. Waiting for processes to exit.
Feb 12 19:18:13.612897 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:37400.service: Deactivated successfully.
Feb 12 19:18:13.613847 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 19:18:13.614286 systemd-logind[1208]: Removed session 12.
Feb 12 19:18:18.613073 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:37408.service.
Feb 12 19:18:18.658321 sshd[3264]: Accepted publickey for core from 10.0.0.1 port 37408 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:18.659699 sshd[3264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:18.663429 systemd-logind[1208]: New session 13 of user core.
Feb 12 19:18:18.664293 systemd[1]: Started session-13.scope.
Feb 12 19:18:18.787209 sshd[3264]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:18.789724 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:37408.service: Deactivated successfully.
Feb 12 19:18:18.790879 systemd-logind[1208]: Session 13 logged out. Waiting for processes to exit.
Feb 12 19:18:18.790907 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 19:18:18.791879 systemd-logind[1208]: Removed session 13.
Feb 12 19:18:23.790054 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:59956.service.
Feb 12 19:18:23.834844 sshd[3296]: Accepted publickey for core from 10.0.0.1 port 59956 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:23.836494 sshd[3296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:23.840112 systemd-logind[1208]: New session 14 of user core.
Feb 12 19:18:23.841021 systemd[1]: Started session-14.scope.
Feb 12 19:18:23.963157 sshd[3296]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:23.965568 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:59956.service: Deactivated successfully.
Feb 12 19:18:23.966551 systemd-logind[1208]: Session 14 logged out. Waiting for processes to exit.
Feb 12 19:18:23.966630 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 19:18:23.967401 systemd-logind[1208]: Removed session 14.
Feb 12 19:18:28.966274 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:59962.service.
Feb 12 19:18:29.010948 sshd[3330]: Accepted publickey for core from 10.0.0.1 port 59962 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:29.012365 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:29.016214 systemd-logind[1208]: New session 15 of user core.
Feb 12 19:18:29.016568 systemd[1]: Started session-15.scope.
Feb 12 19:18:29.126058 sshd[3330]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:29.128834 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:59962.service: Deactivated successfully.
Feb 12 19:18:29.130105 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 19:18:29.130116 systemd-logind[1208]: Session 15 logged out. Waiting for processes to exit.
Feb 12 19:18:29.130974 systemd-logind[1208]: Removed session 15.
Feb 12 19:18:34.128855 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:49366.service.
Feb 12 19:18:34.173730 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 49366 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:34.175480 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:34.179312 systemd-logind[1208]: New session 16 of user core.
Feb 12 19:18:34.179801 systemd[1]: Started session-16.scope.
Feb 12 19:18:34.286659 sshd[3362]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:34.289139 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:49366.service: Deactivated successfully.
Feb 12 19:18:34.290219 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 19:18:34.290223 systemd-logind[1208]: Session 16 logged out. Waiting for processes to exit.
Feb 12 19:18:34.291073 systemd-logind[1208]: Removed session 16.
Feb 12 19:18:39.289368 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:49370.service.
Feb 12 19:18:39.333377 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:39.334980 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:39.338765 systemd-logind[1208]: New session 17 of user core.
Feb 12 19:18:39.339268 systemd[1]: Started session-17.scope.
Feb 12 19:18:39.445400 sshd[3394]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:39.447675 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:49370.service: Deactivated successfully.
Feb 12 19:18:39.448857 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 19:18:39.448874 systemd-logind[1208]: Session 17 logged out. Waiting for processes to exit.
Feb 12 19:18:39.449888 systemd-logind[1208]: Removed session 17.
Feb 12 19:18:43.538298 kubelet[2123]: E0212 19:18:43.538269    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:44.451687 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:42710.service.
Feb 12 19:18:44.495708 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 42710 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:44.497343 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:44.503461 systemd-logind[1208]: New session 18 of user core.
Feb 12 19:18:44.504068 systemd[1]: Started session-18.scope.
Feb 12 19:18:44.538274 kubelet[2123]: E0212 19:18:44.538241    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:44.624520 sshd[3426]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:44.626844 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:42710.service: Deactivated successfully.
Feb 12 19:18:44.627942 systemd-logind[1208]: Session 18 logged out. Waiting for processes to exit.
Feb 12 19:18:44.627982 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 19:18:44.629138 systemd-logind[1208]: Removed session 18.
Feb 12 19:18:46.538229 kubelet[2123]: E0212 19:18:46.538194    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:49.627803 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:42722.service.
Feb 12 19:18:49.677963 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 42722 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:49.679375 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:49.684192 systemd[1]: Started session-19.scope.
Feb 12 19:18:49.685128 systemd-logind[1208]: New session 19 of user core.
Feb 12 19:18:49.823423 sshd[3458]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:49.826638 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:42736.service.
Feb 12 19:18:49.828918 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:42722.service: Deactivated successfully.
Feb 12 19:18:49.830533 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 19:18:49.831285 systemd-logind[1208]: Session 19 logged out. Waiting for processes to exit.
Feb 12 19:18:49.832643 systemd-logind[1208]: Removed session 19.
Feb 12 19:18:49.873560 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 42736 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:49.874855 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:49.880221 systemd[1]: Started session-20.scope.
Feb 12 19:18:49.881387 systemd-logind[1208]: New session 20 of user core.
Feb 12 19:18:50.080440 sshd[3472]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:50.084258 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:42738.service.
Feb 12 19:18:50.088149 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:42736.service: Deactivated successfully.
Feb 12 19:18:50.090200 systemd-logind[1208]: Session 20 logged out. Waiting for processes to exit.
Feb 12 19:18:50.090784 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 19:18:50.092379 systemd-logind[1208]: Removed session 20.
Feb 12 19:18:50.129643 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 42738 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:50.130870 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:50.136036 systemd[1]: Started session-21.scope.
Feb 12 19:18:50.136450 systemd-logind[1208]: New session 21 of user core.
Feb 12 19:18:50.902295 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:42750.service.
Feb 12 19:18:50.903693 sshd[3490]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:50.908433 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:42738.service: Deactivated successfully.
Feb 12 19:18:50.909544 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 19:18:50.915126 systemd-logind[1208]: Session 21 logged out. Waiting for processes to exit.
Feb 12 19:18:50.918332 systemd-logind[1208]: Removed session 21.
Feb 12 19:18:50.951458 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 42750 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:50.952702 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:50.956261 systemd-logind[1208]: New session 22 of user core.
Feb 12 19:18:50.957047 systemd[1]: Started session-22.scope.
Feb 12 19:18:51.133247 sshd[3521]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:51.135447 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:42758.service.
Feb 12 19:18:51.147671 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:42750.service: Deactivated successfully.
Feb 12 19:18:51.148865 systemd-logind[1208]: Session 22 logged out. Waiting for processes to exit.
Feb 12 19:18:51.148909 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 19:18:51.150049 systemd-logind[1208]: Removed session 22.
Feb 12 19:18:51.186494 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 42758 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:51.187260 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:51.191022 systemd-logind[1208]: New session 23 of user core.
Feb 12 19:18:51.191831 systemd[1]: Started session-23.scope.
Feb 12 19:18:51.299213 sshd[3571]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:51.301816 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:42758.service: Deactivated successfully.
Feb 12 19:18:51.302791 systemd-logind[1208]: Session 23 logged out. Waiting for processes to exit.
Feb 12 19:18:51.302839 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 19:18:51.303851 systemd-logind[1208]: Removed session 23.
Feb 12 19:18:55.538210 kubelet[2123]: E0212 19:18:55.538171    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:56.302030 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:43364.service.
Feb 12 19:18:56.354770 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 43364 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:56.356645 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:56.361900 systemd-logind[1208]: New session 24 of user core.
Feb 12 19:18:56.368895 systemd[1]: Started session-24.scope.
Feb 12 19:18:56.487244 sshd[3605]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:56.490837 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:43364.service: Deactivated successfully.
Feb 12 19:18:56.491189 systemd-logind[1208]: Session 24 logged out. Waiting for processes to exit.
Feb 12 19:18:56.491679 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 19:18:56.492137 systemd-logind[1208]: Removed session 24.
Feb 12 19:18:56.538656 kubelet[2123]: E0212 19:18:56.538209    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:59.537651 kubelet[2123]: E0212 19:18:59.537586    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:19:01.490391 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:43370.service.
Feb 12 19:19:01.538533 sshd[3666]: Accepted publickey for core from 10.0.0.1 port 43370 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:19:01.539314 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:19:01.543693 systemd-logind[1208]: New session 25 of user core.
Feb 12 19:19:01.544668 systemd[1]: Started session-25.scope.
Feb 12 19:19:01.659703 sshd[3666]: pam_unix(sshd:session): session closed for user core
Feb 12 19:19:01.662059 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:43370.service: Deactivated successfully.
Feb 12 19:19:01.663025 systemd-logind[1208]: Session 25 logged out. Waiting for processes to exit.
Feb 12 19:19:01.663093 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 19:19:01.663726 systemd-logind[1208]: Removed session 25.
Feb 12 19:19:06.662546 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:56212.service.
Feb 12 19:19:06.710191 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 56212 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:19:06.711229 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:19:06.715735 systemd[1]: Started session-26.scope.
Feb 12 19:19:06.715938 systemd-logind[1208]: New session 26 of user core.
Feb 12 19:19:06.834126 sshd[3698]: pam_unix(sshd:session): session closed for user core
Feb 12 19:19:06.836724 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:56212.service: Deactivated successfully.
Feb 12 19:19:06.837762 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 19:19:06.838108 systemd-logind[1208]: Session 26 logged out. Waiting for processes to exit.
Feb 12 19:19:06.838822 systemd-logind[1208]: Removed session 26.
Feb 12 19:19:11.836994 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:56218.service.
Feb 12 19:19:11.888242 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 56218 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:19:11.889483 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:19:11.893578 systemd-logind[1208]: New session 27 of user core.
Feb 12 19:19:11.894334 systemd[1]: Started session-27.scope.
Feb 12 19:19:12.008711 sshd[3742]: pam_unix(sshd:session): session closed for user core
Feb 12 19:19:12.011054 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:56218.service: Deactivated successfully.
Feb 12 19:19:12.012313 systemd-logind[1208]: Session 27 logged out. Waiting for processes to exit.
Feb 12 19:19:12.012495 systemd[1]: session-27.scope: Deactivated successfully.
Feb 12 19:19:12.013902 systemd-logind[1208]: Removed session 27.
Feb 12 19:19:12.538157 kubelet[2123]: E0212 19:19:12.538117    2123 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"