Feb 12 19:07:49.738867 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 12 19:07:49.738901 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024 Feb 12 19:07:49.738909 kernel: efi: EFI v2.70 by EDK II Feb 12 19:07:49.738915 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 12 19:07:49.738920 kernel: random: crng init done Feb 12 19:07:49.738926 kernel: ACPI: Early table checksum verification disabled Feb 12 19:07:49.738932 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 12 19:07:49.738939 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 12 19:07:49.738944 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738950 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738955 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738960 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738966 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738971 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738979 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738984 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738990 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:07:49.738996 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 12 19:07:49.739001 kernel: NUMA: Failed to initialise from firmware Feb 12 19:07:49.739007 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 12 19:07:49.739013 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Feb 12 19:07:49.739018 kernel: Zone ranges: Feb 12 19:07:49.739024 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 12 19:07:49.739031 kernel: DMA32 empty Feb 12 19:07:49.739037 kernel: Normal empty Feb 12 19:07:49.739042 kernel: Movable zone start for each node Feb 12 19:07:49.739048 kernel: Early memory node ranges Feb 12 19:07:49.739053 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 12 19:07:49.739062 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 12 19:07:49.739067 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 12 19:07:49.739073 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 12 19:07:49.739079 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 12 19:07:49.739084 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 12 19:07:49.739090 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 12 19:07:49.739096 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 12 19:07:49.739102 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 12 19:07:49.739108 kernel: psci: probing for conduit method from ACPI. Feb 12 19:07:49.739114 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 12 19:07:49.739119 kernel: psci: Using standard PSCI v0.2 function IDs Feb 12 19:07:49.739125 kernel: psci: Trusted OS migration not required Feb 12 19:07:49.739133 kernel: psci: SMC Calling Convention v1.1 Feb 12 19:07:49.739139 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 12 19:07:49.739147 kernel: ACPI: SRAT not present Feb 12 19:07:49.739153 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 12 19:07:49.739159 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 12 19:07:49.739165 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 12 19:07:49.739171 kernel: Detected PIPT I-cache on CPU0 Feb 12 19:07:49.739178 kernel: CPU features: detected: GIC system register CPU interface Feb 12 19:07:49.739184 kernel: CPU features: detected: Hardware dirty bit management Feb 12 19:07:49.739190 kernel: CPU features: detected: Spectre-v4 Feb 12 19:07:49.739196 kernel: CPU features: detected: Spectre-BHB Feb 12 19:07:49.739203 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 12 19:07:49.739209 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 12 19:07:49.739215 kernel: CPU features: detected: ARM erratum 1418040 Feb 12 19:07:49.739221 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 12 19:07:49.739227 kernel: Policy zone: DMA Feb 12 19:07:49.739234 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40 Feb 12 19:07:49.739241 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 19:07:49.739247 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 19:07:49.739253 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 19:07:49.739260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:07:49.739266 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved) Feb 12 19:07:49.739274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 19:07:49.739280 kernel: trace event string verifier disabled Feb 12 19:07:49.739296 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 12 19:07:49.739303 kernel: rcu: RCU event tracing is enabled. Feb 12 19:07:49.739312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 19:07:49.739323 kernel: Trampoline variant of Tasks RCU enabled. Feb 12 19:07:49.739329 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:07:49.739343 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 12 19:07:49.739350 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 19:07:49.739364 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 12 19:07:49.739374 kernel: GICv3: 256 SPIs implemented Feb 12 19:07:49.739383 kernel: GICv3: 0 Extended SPIs implemented Feb 12 19:07:49.739389 kernel: GICv3: Distributor has no Range Selector support Feb 12 19:07:49.739395 kernel: Root IRQ handler: gic_handle_irq Feb 12 19:07:49.739401 kernel: GICv3: 16 PPIs implemented Feb 12 19:07:49.739407 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 12 19:07:49.739413 kernel: ACPI: SRAT not present Feb 12 19:07:49.739419 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 12 19:07:49.739425 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 12 19:07:49.739431 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 12 19:07:49.739437 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 12 19:07:49.739443 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 12 19:07:49.739449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:07:49.739457 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 12 19:07:49.739463 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 12 19:07:49.739469 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 12 19:07:49.739475 kernel: arm-pv: using stolen time PV Feb 12 19:07:49.739482 kernel: Console: colour dummy device 80x25 Feb 12 19:07:49.739488 kernel: ACPI: Core revision 20210730 Feb 12 19:07:49.739494 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 12 19:07:49.739501 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:07:49.739514 kernel: LSM: Security Framework initializing Feb 12 19:07:49.739521 kernel: SELinux: Initializing. Feb 12 19:07:49.739529 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:07:49.739535 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:07:49.739542 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:07:49.739548 kernel: Platform MSI: ITS@0x8080000 domain created Feb 12 19:07:49.739554 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 12 19:07:49.739560 kernel: Remapping and enabling EFI services. Feb 12 19:07:49.739566 kernel: smp: Bringing up secondary CPUs ... 
Feb 12 19:07:49.739572 kernel: Detected PIPT I-cache on CPU1 Feb 12 19:07:49.739579 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 12 19:07:49.739586 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 12 19:07:49.739593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:07:49.739599 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 12 19:07:49.739606 kernel: Detected PIPT I-cache on CPU2 Feb 12 19:07:49.739612 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 12 19:07:49.739618 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 12 19:07:49.739625 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:07:49.739631 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 12 19:07:49.739638 kernel: Detected PIPT I-cache on CPU3 Feb 12 19:07:49.739644 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 12 19:07:49.739651 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 12 19:07:49.739658 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 12 19:07:49.739664 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 12 19:07:49.739671 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 19:07:49.739681 kernel: SMP: Total of 4 processors activated. Feb 12 19:07:49.739689 kernel: CPU features: detected: 32-bit EL0 Support Feb 12 19:07:49.739695 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 12 19:07:49.739702 kernel: CPU features: detected: Common not Private translations Feb 12 19:07:49.739708 kernel: CPU features: detected: CRC32 instructions Feb 12 19:07:49.739715 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 12 19:07:49.739721 kernel: CPU features: detected: LSE atomic instructions Feb 12 19:07:49.739728 kernel: CPU features: detected: Privileged Access Never Feb 12 19:07:49.739735 kernel: CPU features: detected: RAS Extension Support Feb 12 19:07:49.739742 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 12 19:07:49.739748 kernel: CPU: All CPU(s) started at EL1 Feb 12 19:07:49.739755 kernel: alternatives: patching kernel code Feb 12 19:07:49.739763 kernel: devtmpfs: initialized Feb 12 19:07:49.739769 kernel: KASLR enabled Feb 12 19:07:49.739777 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:07:49.739784 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 19:07:49.739791 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:07:49.739798 kernel: SMBIOS 3.0.0 present. 
Feb 12 19:07:49.739805 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 12 19:07:49.739811 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 19:07:49.739818 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 12 19:07:49.739825 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 12 19:07:49.739833 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 12 19:07:49.739839 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:07:49.739846 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Feb 12 19:07:49.739866 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:07:49.739873 kernel: cpuidle: using governor menu Feb 12 19:07:49.739880 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 12 19:07:49.739887 kernel: ASID allocator initialised with 32768 entries Feb 12 19:07:49.739893 kernel: ACPI: bus type PCI registered Feb 12 19:07:49.739900 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:07:49.739908 kernel: Serial: AMBA PL011 UART driver Feb 12 19:07:49.739914 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 19:07:49.739921 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 12 19:07:49.739928 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:07:49.739934 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 12 19:07:49.739941 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:07:49.739947 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 12 19:07:49.739954 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:07:49.739960 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:07:49.739968 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:07:49.739975 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:07:49.739981 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:07:49.739988 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:07:49.739994 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:07:49.740001 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:07:49.740007 kernel: ACPI: Interpreter enabled Feb 12 19:07:49.740013 kernel: ACPI: Using GIC for interrupt routing Feb 12 19:07:49.740020 kernel: ACPI: MCFG table detected, 1 entries Feb 12 19:07:49.740028 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 12 19:07:49.740034 kernel: printk: console [ttyAMA0] enabled Feb 12 19:07:49.740041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 19:07:49.740162 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 19:07:49.740226 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 12 19:07:49.740284 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 12 19:07:49.740341 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 12 19:07:49.740416 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 12 19:07:49.740426 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 12 19:07:49.740433 kernel: PCI host bridge to bus 0000:00 Feb 12 19:07:49.740499 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 12 19:07:49.740569 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] Feb 12 19:07:49.740622 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 12 19:07:49.740675 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 19:07:49.740753 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 12 19:07:49.740829 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 19:07:49.740892 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 12 19:07:49.740952 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 12 19:07:49.741012 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 12 19:07:49.741071 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 12 19:07:49.741130 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 12 19:07:49.741193 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 12 19:07:49.741260 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 12 19:07:49.741313 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 12 19:07:49.741365 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 12 19:07:49.741384 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 12 19:07:49.741405 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 12 19:07:49.741412 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 12 19:07:49.741420 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 12 19:07:49.741427 kernel: iommu: Default domain type: Translated Feb 12 19:07:49.741434 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 12 19:07:49.741440 kernel: vgaarb: loaded Feb 12 19:07:49.741447 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 19:07:49.741453 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Feb 12 19:07:49.741460 kernel: PTP clock support registered Feb 12 19:07:49.741467 kernel: Registered efivars operations Feb 12 19:07:49.741473 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 12 19:07:49.741482 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:07:49.741489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:07:49.741495 kernel: pnp: PnP ACPI init Feb 12 19:07:49.741612 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 12 19:07:49.741624 kernel: pnp: PnP ACPI: found 1 devices Feb 12 19:07:49.741631 kernel: NET: Registered PF_INET protocol family Feb 12 19:07:49.741638 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 19:07:49.741645 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 19:07:49.741652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:07:49.741661 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 19:07:49.741668 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 19:07:49.741675 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 19:07:49.741683 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:07:49.741689 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:07:49.741696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:07:49.741703 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:07:49.741710 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 12 19:07:49.741718 kernel: kvm [1]: HYP mode not available Feb 12 19:07:49.741725 kernel: Initialise system trusted keyrings Feb 12 19:07:49.741732 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 12 19:07:49.741738 kernel: Key type asymmetric registered Feb 12 19:07:49.741745 kernel: Asymmetric key parser 'x509' registered Feb 12 19:07:49.741752 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:07:49.741758 kernel: io scheduler mq-deadline registered Feb 12 19:07:49.741765 kernel: io scheduler kyber registered Feb 12 19:07:49.741771 kernel: io scheduler bfq registered Feb 12 19:07:49.741778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 12 19:07:49.741785 kernel: ACPI: button: Power Button [PWRB] Feb 12 19:07:49.741793 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 12 19:07:49.741858 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 12 19:07:49.741867 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:07:49.741874 kernel: thunder_xcv, ver 1.0 Feb 12 19:07:49.741881 kernel: thunder_bgx, ver 1.0 Feb 12 19:07:49.741887 kernel: nicpf, ver 1.0 Feb 12 19:07:49.741894 kernel: nicvf, ver 1.0 Feb 12 19:07:49.742207 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 12 19:07:49.742279 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:07:49 UTC (1707764869) Feb 12 19:07:49.742288 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 12 19:07:49.742295 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:07:49.742301 kernel: Segment Routing with IPv6 Feb 12 19:07:49.742308 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:07:49.742314 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:07:49.742321 kernel: Key type
dns_resolver registered Feb 12 19:07:49.742328 kernel: registered taskstats version 1 Feb 12 19:07:49.742336 kernel: Loading compiled-in X.509 certificates Feb 12 19:07:49.742343 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c' Feb 12 19:07:49.742350 kernel: Key type .fscrypt registered Feb 12 19:07:49.742356 kernel: Key type fscrypt-provisioning registered Feb 12 19:07:49.742363 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 19:07:49.742383 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:07:49.742392 kernel: ima: No architecture policies found Feb 12 19:07:49.742398 kernel: Freeing unused kernel memory: 34688K Feb 12 19:07:49.742407 kernel: Run /init as init process Feb 12 19:07:49.742414 kernel: with arguments: Feb 12 19:07:49.742420 kernel: /init Feb 12 19:07:49.742427 kernel: with environment: Feb 12 19:07:49.742433 kernel: HOME=/ Feb 12 19:07:49.742439 kernel: TERM=linux Feb 12 19:07:49.742446 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:07:49.742455 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:07:49.742464 systemd[1]: Detected virtualization kvm. Feb 12 19:07:49.742472 systemd[1]: Detected architecture arm64. Feb 12 19:07:49.742479 systemd[1]: Running in initrd. Feb 12 19:07:49.742486 systemd[1]: No hostname configured, using default hostname. Feb 12 19:07:49.742493 systemd[1]: Hostname set to <localhost>. Feb 12 19:07:49.742500 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:07:49.742516 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:07:49.742523 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:07:49.742530 systemd[1]: Reached target cryptsetup.target. Feb 12 19:07:49.742539 systemd[1]: Reached target paths.target. Feb 12 19:07:49.742546 systemd[1]: Reached target slices.target. Feb 12 19:07:49.742553 systemd[1]: Reached target swap.target. Feb 12 19:07:49.742561 systemd[1]: Reached target timers.target. Feb 12 19:07:49.742568 systemd[1]: Listening on iscsid.socket. Feb 12 19:07:49.742575 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:07:49.742583 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:07:49.742591 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:07:49.742598 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:07:49.742605 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:07:49.742617 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:07:49.742624 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:07:49.742631 systemd[1]: Reached target sockets.target. Feb 12 19:07:49.742638 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:07:49.742645 systemd[1]: Finished network-cleanup.service. Feb 12 19:07:49.742652 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:07:49.742661 systemd[1]: Starting systemd-journald.service... Feb 12 19:07:49.742668 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:07:49.742675 systemd[1]: Starting systemd-resolved.service... Feb 12 19:07:49.742682 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:07:49.742689 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:07:49.742696 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 19:07:49.742703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:07:49.742710 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:07:49.742718 kernel: audit: type=1130 audit(1707764869.740:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.742727 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:07:49.742737 systemd-journald[291]: Journal started Feb 12 19:07:49.742777 systemd-journald[291]: Runtime Journal (/run/log/journal/5cac4c753e0b4b2ca27829a060326522) is 6.0M, max 48.7M, 42.6M free. Feb 12 19:07:49.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.734895 systemd-modules-load[292]: Inserted module 'overlay' Feb 12 19:07:49.746500 systemd[1]: Started systemd-journald.service. Feb 12 19:07:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.749976 kernel: audit: type=1130 audit(1707764869.746:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.749681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:07:49.753466 kernel: audit: type=1130 audit(1707764869.750:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.756773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:07:49.761986 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:07:49.766861 kernel: audit: type=1130 audit(1707764869.762:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.766881 kernel: Bridge firewalling registered Feb 12 19:07:49.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.763704 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:07:49.765627 systemd-modules-load[292]: Inserted module 'br_netfilter' Feb 12 19:07:49.773202 systemd-resolved[293]: Positive Trust Anchors: Feb 12 19:07:49.773216 systemd-resolved[293]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:07:49.773244 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:07:49.778255 systemd-resolved[293]: Defaulting to hostname 'linux'. Feb 12 19:07:49.780599 dracut-cmdline[307]: dracut-dracut-053 Feb 12 19:07:49.780599 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40 Feb 12 19:07:49.787175 kernel: SCSI subsystem initialized Feb 12 19:07:49.787198 kernel: audit: type=1130 audit(1707764869.780:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.779064 systemd[1]: Started systemd-resolved.service. Feb 12 19:07:49.781261 systemd[1]: Reached target nss-lookup.target. Feb 12 19:07:49.789971 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:07:49.789988 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:07:49.791383 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:07:49.793723 systemd-modules-load[292]: Inserted module 'dm_multipath' Feb 12 19:07:49.794558 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:07:49.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.795987 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:07:49.798475 kernel: audit: type=1130 audit(1707764869.794:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.805191 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:07:49.809487 kernel: audit: type=1130 audit(1707764869.805:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.848391 kernel: Loading iSCSI transport class v2.0-870. 
Feb 12 19:07:49.856393 kernel: iscsi: registered transport (tcp) Feb 12 19:07:49.871498 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:07:49.871553 kernel: QLogic iSCSI HBA Driver Feb 12 19:07:49.900412 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:07:49.903385 kernel: audit: type=1130 audit(1707764869.900:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:49.902030 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:07:49.944395 kernel: raid6: neonx8 gen() 13795 MB/s Feb 12 19:07:49.961384 kernel: raid6: neonx8 xor() 10823 MB/s Feb 12 19:07:49.978382 kernel: raid6: neonx4 gen() 13565 MB/s Feb 12 19:07:49.995379 kernel: raid6: neonx4 xor() 11229 MB/s Feb 12 19:07:50.012385 kernel: raid6: neonx2 gen() 12971 MB/s Feb 12 19:07:50.029387 kernel: raid6: neonx2 xor() 10270 MB/s Feb 12 19:07:50.046384 kernel: raid6: neonx1 gen() 10501 MB/s Feb 12 19:07:50.063390 kernel: raid6: neonx1 xor() 8789 MB/s Feb 12 19:07:50.080382 kernel: raid6: int64x8 gen() 6285 MB/s Feb 12 19:07:50.097385 kernel: raid6: int64x8 xor() 3549 MB/s Feb 12 19:07:50.114390 kernel: raid6: int64x4 gen() 7227 MB/s Feb 12 19:07:50.131385 kernel: raid6: int64x4 xor() 3849 MB/s Feb 12 19:07:50.148392 kernel: raid6: int64x2 gen() 6149 MB/s Feb 12 19:07:50.165394 kernel: raid6: int64x2 xor() 3316 MB/s Feb 12 19:07:50.182384 kernel: raid6: int64x1 gen() 5043 MB/s Feb 12 19:07:50.199577 kernel: raid6: int64x1 xor() 2644 MB/s Feb 12 19:07:50.199592 kernel: raid6: using algorithm neonx8 gen() 13795 MB/s Feb 12 19:07:50.199601 kernel: raid6: .... xor() 10823 MB/s, rmw enabled Feb 12 19:07:50.199609 kernel: raid6: using neon recovery algorithm Feb 12 19:07:50.210615 kernel: xor: measuring software checksum speed Feb 12 19:07:50.210630 kernel: 8regs : 17297 MB/sec Feb 12 19:07:50.211444 kernel: 32regs : 20755 MB/sec Feb 12 19:07:50.212602 kernel: arm64_neon : 27959 MB/sec Feb 12 19:07:50.212614 kernel: xor: using function: arm64_neon (27959 MB/sec) Feb 12 19:07:50.271389 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 12 19:07:50.281933 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:07:50.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:50.285416 kernel: audit: type=1130 audit(1707764870.282:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:50.284000 audit: BPF prog-id=7 op=LOAD Feb 12 19:07:50.284000 audit: BPF prog-id=8 op=LOAD Feb 12 19:07:50.285795 systemd[1]: Starting systemd-udevd.service... Feb 12 19:07:50.299964 systemd-udevd[490]: Using default interface naming scheme 'v252'. Feb 12 19:07:50.303300 systemd[1]: Started systemd-udevd.service. Feb 12 19:07:50.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:50.305385 systemd[1]: Starting dracut-pre-trigger.service... 
Feb 12 19:07:50.318238 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Feb 12 19:07:50.347444 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:07:50.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:50.349067 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:07:50.392126 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:07:50.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:50.419442 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 19:07:50.423557 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:07:50.423597 kernel: GPT:9289727 != 19775487 Feb 12 19:07:50.423607 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:07:50.423616 kernel: GPT:9289727 != 19775487 Feb 12 19:07:50.424758 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:07:50.424781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:07:50.441208 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:07:50.443351 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (554) Feb 12 19:07:50.442033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:07:50.448383 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:07:50.451585 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:07:50.454953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:07:50.458494 systemd[1]: Starting disk-uuid.service... Feb 12 19:07:50.464958 disk-uuid[563]: Primary Header is updated. Feb 12 19:07:50.464958 disk-uuid[563]: Secondary Entries is updated. Feb 12 19:07:50.464958 disk-uuid[563]: Secondary Header is updated. Feb 12 19:07:50.469394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:07:50.481395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:07:51.482399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:07:51.482530 disk-uuid[564]: The operation has completed successfully. Feb 12 19:07:51.509719 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:07:51.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.509814 systemd[1]: Finished disk-uuid.service. Feb 12 19:07:51.511367 systemd[1]: Starting verity-setup.service... Feb 12 19:07:51.533420 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:07:51.559246 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:07:51.560939 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:07:51.565572 systemd[1]: Finished verity-setup.service. Feb 12 19:07:51.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:07:51.613198 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:07:51.614457 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:07:51.614084 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:07:51.614816 systemd[1]: Starting ignition-setup.service... Feb 12 19:07:51.617200 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:07:51.624926 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:07:51.624983 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:07:51.624993 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:07:51.634019 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:07:51.648769 systemd[1]: Finished ignition-setup.service. Feb 12 19:07:51.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.650249 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:07:51.693313 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:07:51.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.694000 audit: BPF prog-id=9 op=LOAD Feb 12 19:07:51.695424 systemd[1]: Starting systemd-networkd.service... Feb 12 19:07:51.720683 systemd-networkd[740]: lo: Link UP Feb 12 19:07:51.720692 systemd-networkd[740]: lo: Gained carrier Feb 12 19:07:51.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.721199 systemd-networkd[740]: Enumeration completed Feb 12 19:07:51.721367 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:07:51.721494 systemd[1]: Started systemd-networkd.service. Feb 12 19:07:51.722221 systemd[1]: Reached target network.target. Feb 12 19:07:51.723859 systemd[1]: Starting iscsiuio.service... Feb 12 19:07:51.725447 systemd-networkd[740]: eth0: Link UP Feb 12 19:07:51.725451 systemd-networkd[740]: eth0: Gained carrier Feb 12 19:07:51.734135 systemd[1]: Started iscsiuio.service. Feb 12 19:07:51.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.735663 systemd[1]: Starting iscsid.service... Feb 12 19:07:51.739518 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:07:51.739518 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:07:51.739518 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:07:51.739518 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:07:51.739518 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:07:51.739518 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:07:51.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.743913 systemd[1]: Started iscsid.service. Feb 12 19:07:51.745806 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:07:51.753513 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:07:51.758434 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:07:51.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.759470 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:07:51.758683 ignition[667]: Ignition 2.14.0 Feb 12 19:07:51.760557 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:07:51.758690 ignition[667]: Stage: fetch-offline Feb 12 19:07:51.761838 systemd[1]: Reached target remote-fs.target. Feb 12 19:07:51.758728 ignition[667]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:07:51.763715 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:07:51.758737 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:07:51.758862 ignition[667]: parsed url from cmdline: "" Feb 12 19:07:51.758865 ignition[667]: no config URL provided Feb 12 19:07:51.758870 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:07:51.758877 ignition[667]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:07:51.758895 ignition[667]: op(1): [started] loading QEMU firmware config module Feb 12 19:07:51.758899 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:07:51.768319 ignition[667]: op(1): [finished] loading QEMU firmware config module Feb 12 19:07:51.772449 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:07:51.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.812721 ignition[667]: parsing config with SHA512: 19390ec11821b399a08332fe772896d26d5419c4e5021f006ce00ac97c15a2e2bb8e3cf1cd05504616d1ecbc7129e3d61f67f43385c84b391fa327560312f327 Feb 12 19:07:51.847077 unknown[667]: fetched base config from "system" Feb 12 19:07:51.847088 unknown[667]: fetched user config from "qemu" Feb 12 19:07:51.847670 ignition[667]: fetch-offline: fetch-offline passed Feb 12 19:07:51.847729 ignition[667]: Ignition finished successfully Feb 12 19:07:51.850313 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:07:51.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.851094 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:07:51.851861 systemd[1]: Starting ignition-kargs.service... 
Feb 12 19:07:51.860291 ignition[761]: Ignition 2.14.0 Feb 12 19:07:51.860305 ignition[761]: Stage: kargs Feb 12 19:07:51.860418 ignition[761]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:07:51.860427 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:07:51.861456 ignition[761]: kargs: kargs passed Feb 12 19:07:51.861509 ignition[761]: Ignition finished successfully Feb 12 19:07:51.864576 systemd[1]: Finished ignition-kargs.service. Feb 12 19:07:51.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.866078 systemd[1]: Starting ignition-disks.service... Feb 12 19:07:51.872721 ignition[767]: Ignition 2.14.0 Feb 12 19:07:51.872730 ignition[767]: Stage: disks Feb 12 19:07:51.872820 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:07:51.872829 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:07:51.875077 systemd[1]: Finished ignition-disks.service. Feb 12 19:07:51.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.873917 ignition[767]: disks: disks passed Feb 12 19:07:51.876450 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:07:51.873962 ignition[767]: Ignition finished successfully Feb 12 19:07:51.877311 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:07:51.878193 systemd[1]: Reached target local-fs.target. Feb 12 19:07:51.879271 systemd[1]: Reached target sysinit.target. Feb 12 19:07:51.880192 systemd[1]: Reached target basic.target. Feb 12 19:07:51.882024 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:07:51.892778 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:07:51.896718 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:07:51.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.898286 systemd[1]: Mounting sysroot.mount... Feb 12 19:07:51.905057 systemd[1]: Mounted sysroot.mount. Feb 12 19:07:51.906135 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:07:51.905830 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:07:51.907865 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:07:51.908693 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:07:51.908733 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:07:51.908757 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:07:51.910748 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:07:51.912943 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:07:51.918340 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:07:51.923048 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:07:51.926931 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:07:51.930657 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:07:51.958459 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:07:51.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.960090 systemd[1]: Starting ignition-mount.service... Feb 12 19:07:51.961392 systemd[1]: Starting sysroot-boot.service... Feb 12 19:07:51.966359 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:07:51.974797 ignition[828]: INFO : Ignition 2.14.0 Feb 12 19:07:51.974797 ignition[828]: INFO : Stage: mount Feb 12 19:07:51.976046 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:07:51.976046 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:07:51.976046 ignition[828]: INFO : mount: mount passed Feb 12 19:07:51.977963 ignition[828]: INFO : Ignition finished successfully Feb 12 19:07:51.977725 systemd[1]: Finished ignition-mount.service. Feb 12 19:07:51.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:51.982224 systemd[1]: Finished sysroot-boot.service. Feb 12 19:07:51.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:52.570733 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:07:52.576404 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) Feb 12 19:07:52.577788 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:07:52.577806 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:07:52.577815 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:07:52.584567 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:07:52.585953 systemd[1]: Starting ignition-files.service... 
Feb 12 19:07:52.599852 ignition[856]: INFO : Ignition 2.14.0 Feb 12 19:07:52.599852 ignition[856]: INFO : Stage: files Feb 12 19:07:52.601122 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:07:52.601122 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:07:52.601122 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:07:52.604097 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:07:52.604097 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:07:52.607665 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:07:52.608663 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:07:52.609791 unknown[856]: wrote ssh authorized keys file for user: core Feb 12 19:07:52.610668 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:07:52.610668 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:07:52.610668 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 12 19:07:52.932955 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:07:53.075514 systemd-networkd[740]: eth0: Gained IPv6LL Feb 12 19:07:53.228992 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 12 19:07:53.231094 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:07:53.231094 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:07:53.231094 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:07:53.446091 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:07:53.598784 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 12 19:07:53.600941 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:07:53.600941 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:07:53.600941 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 12 19:07:53.798272 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:07:53.832655 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:07:53.834115 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:07:53.834115 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:07:53.883783 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:07:54.148167 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 12 19:07:54.150389 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:07:54.150389 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:07:54.150389 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:07:54.171756 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:07:54.745986 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 12 19:07:54.748158 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:07:54.748158 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:07:54.748158 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1 Feb 12 19:07:54.769878 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:07:55.180907 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): 
[started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:07:55.180907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:07:55.200524 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:07:55.200524 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:07:55.200524 ignition[856]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:07:55.224177 
ignition[856]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 19:07:55.224177 ignition[856]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:07:55.235908 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:07:55.235931 kernel: audit: type=1130 audit(1707764875.232:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.236005 ignition[856]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:07:55.236005 ignition[856]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 19:07:55.236005 ignition[856]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:07:55.236005 ignition[856]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:07:55.236005 ignition[856]: INFO : files: files passed Feb 12 19:07:55.236005 ignition[856]: INFO : Ignition finished successfully Feb 12 19:07:55.246206 kernel: audit: type=1130 audit(1707764875.242:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.230935 systemd[1]: Finished ignition-files.service. Feb 12 19:07:55.233530 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:07:55.248265 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 19:07:55.253463 kernel: audit: type=1130 audit(1707764875.248:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.253491 kernel: audit: type=1131 audit(1707764875.248:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:07:55.237864 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:07:55.255268 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:07:55.238687 systemd[1]: Starting ignition-quench.service... Feb 12 19:07:55.241424 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:07:55.242618 systemd[1]: Reached target ignition-complete.target. Feb 12 19:07:55.246461 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:07:55.247824 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:07:55.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.247915 systemd[1]: Finished ignition-quench.service. Feb 12 19:07:55.266217 kernel: audit: type=1130 audit(1707764875.260:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.266239 kernel: audit: type=1131 audit(1707764875.260:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.259501 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:07:55.259595 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:07:55.260971 systemd[1]: Reached target initrd-fs.target. Feb 12 19:07:55.265674 systemd[1]: Reached target initrd.target. Feb 12 19:07:55.266893 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:07:55.267706 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:07:55.277813 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:07:55.281460 kernel: audit: type=1130 audit(1707764875.278:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.279392 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:07:55.287325 systemd[1]: Stopped target network.target. Feb 12 19:07:55.288147 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:07:55.289333 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:07:55.290646 systemd[1]: Stopped target timers.target. Feb 12 19:07:55.291690 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:07:55.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.291803 systemd[1]: Stopped dracut-pre-pivot.service. 
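
Annotation: the Ignition files stage above downloads each artifact and logs "file matches expected sum of: ..." once the sha512 digest of the download matches the sum supplied in the config. A minimal sketch of that kind of check (illustrative only; Ignition itself is a Go binary and this is not its code), using the CNI plugins URL and sum taken from the lines above:

import hashlib
import urllib.request

URL = ("https://github.com/containernetworking/plugins/releases/download/"
       "v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz")
EXPECTED_SHA512 = ("b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921db"
                   "a80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a")

def sha512_of(url: str) -> str:
    # Stream the download so large artifacts are not held in memory at once.
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha512_of(URL)
    print("sum matches" if actual == EXPECTED_SHA512 else f"mismatch: {actual}")
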
Feb 12 19:07:55.296093 kernel: audit: type=1131 audit(1707764875.292:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.295024 systemd[1]: Stopped target initrd.target. Feb 12 19:07:55.295731 systemd[1]: Stopped target basic.target. Feb 12 19:07:55.296816 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:07:55.298078 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:07:55.299182 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:07:55.300505 systemd[1]: Stopped target remote-fs.target. Feb 12 19:07:55.301707 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:07:55.302944 systemd[1]: Stopped target sysinit.target. Feb 12 19:07:55.304120 systemd[1]: Stopped target local-fs.target. Feb 12 19:07:55.305225 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:07:55.306407 systemd[1]: Stopped target swap.target. Feb 12 19:07:55.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.307512 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:07:55.312193 kernel: audit: type=1131 audit(1707764875.308:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.307632 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:07:55.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.309548 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:07:55.316557 kernel: audit: type=1131 audit(1707764875.312:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.311481 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:07:55.311637 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:07:55.313089 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:07:55.313227 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:07:55.316137 systemd[1]: Stopped target paths.target. Feb 12 19:07:55.317212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:07:55.320416 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:07:55.321732 systemd[1]: Stopped target slices.target. Feb 12 19:07:55.323131 systemd[1]: Stopped target sockets.target. Feb 12 19:07:55.324394 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:07:55.324511 systemd[1]: Closed iscsid.socket. Feb 12 19:07:55.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.325526 systemd[1]: iscsiuio.socket: Deactivated successfully. 
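
Annotation: the audit records interleaved above carry their own timestamp and serial, e.g. audit(1707764875.232:34) in the kauditd line; converting that epoch back to UTC lands on the same instant as the 19:07:55.232 prefix of the matching SERVICE_START record. A quick conversion:

from datetime import datetime, timezone

stamp = "audit(1707764875.232:34)"              # copied from the kauditd line above
epoch, serial = stamp[len("audit("):-1].split(":")
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.isoformat(timespec="milliseconds"), "serial", serial)
# -> 2024-02-12T19:07:55.232+00:00 serial 34
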
Feb 12 19:07:55.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.325625 systemd[1]: Closed iscsiuio.socket. Feb 12 19:07:55.326627 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:07:55.326775 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:07:55.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.328065 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:07:55.328193 systemd[1]: Stopped ignition-files.service. Feb 12 19:07:55.329937 systemd[1]: Stopping ignition-mount.service... Feb 12 19:07:55.330858 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:07:55.331021 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:07:55.337467 ignition[897]: INFO : Ignition 2.14.0 Feb 12 19:07:55.337467 ignition[897]: INFO : Stage: umount Feb 12 19:07:55.337467 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:07:55.337467 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:07:55.337467 ignition[897]: INFO : umount: umount passed Feb 12 19:07:55.337467 ignition[897]: INFO : Ignition finished successfully Feb 12 19:07:55.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.333095 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:07:55.336660 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:07:55.338336 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:07:55.339365 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:07:55.339550 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:07:55.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.340677 systemd-networkd[740]: eth0: DHCPv6 lease lost Feb 12 19:07:55.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.341669 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:07:55.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.341841 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:07:55.346290 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:07:55.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:07:55.353000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:07:55.347240 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:07:55.347343 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:07:55.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.355000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:07:55.349069 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:07:55.349154 systemd[1]: Stopped ignition-mount.service. Feb 12 19:07:55.351243 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:07:55.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.351329 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:07:55.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.352732 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:07:55.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.352810 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:07:55.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.355144 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:07:55.355224 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:07:55.356852 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:07:55.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.356883 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:07:55.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.357702 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:07:55.357738 systemd[1]: Stopped ignition-disks.service. Feb 12 19:07:55.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.358744 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:07:55.358778 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:07:55.359826 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:07:55.359857 systemd[1]: Stopped ignition-setup.service. Feb 12 19:07:55.360770 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
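
Annotation: records in a console capture like this one run together on long lines; splitting on the "Feb 12 HH:MM:SS.ffffff " prefix recovers one record per line. A rough sketch (assumes this month/day prefix format and Python 3.7+ for zero-width splits), fed with two records copied from above:

import re

# One record starts wherever a "Feb 12 HH:MM:SS.ffffff " timestamp begins.
RECORD_START = re.compile(r"(?=Feb 12 \d{2}:\d{2}:\d{2}\.\d{6} )")

chunk = ("Feb 12 19:07:55.353000 audit: BPF prog-id=9 op=UNLOAD "
         "Feb 12 19:07:55.355144 systemd[1]: initrd-cleanup.service: "
         "Deactivated successfully.")
for record in filter(None, RECORD_START.split(chunk)):
    print(record.strip())
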
Feb 12 19:07:55.360802 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:07:55.362570 systemd[1]: Stopping network-cleanup.service... Feb 12 19:07:55.363610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:07:55.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.363663 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:07:55.364933 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:07:55.364976 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:07:55.366561 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:07:55.366605 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:07:55.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.368151 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:07:55.372177 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:07:55.375043 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:07:55.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.375127 systemd[1]: Stopped network-cleanup.service. Feb 12 19:07:55.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.378869 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:07:55.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.378979 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:07:55.380274 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:07:55.380309 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:07:55.381246 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:07:55.381277 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:07:55.382244 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:07:55.382285 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:07:55.383768 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:07:55.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.383807 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:07:55.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:07:55.384936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:07:55.384974 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:07:55.386849 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:07:55.387816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:07:55.387878 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:07:55.392266 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:07:55.392351 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:07:55.393228 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:07:55.394921 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:07:55.401366 systemd[1]: Switching root. Feb 12 19:07:55.419748 iscsid[745]: iscsid shutting down. Feb 12 19:07:55.420245 systemd-journald[291]: Journal stopped Feb 12 19:07:57.607145 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Feb 12 19:07:57.607204 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:07:57.607216 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:07:57.607226 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:07:57.607236 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:07:57.607250 kernel: SELinux: policy capability open_perms=1 Feb 12 19:07:57.607260 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:07:57.607270 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:07:57.607281 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:07:57.607292 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:07:57.607303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:07:57.607313 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:07:57.607323 systemd[1]: Successfully loaded SELinux policy in 32.453ms. Feb 12 19:07:57.607339 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.073ms. Feb 12 19:07:57.607350 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:07:57.607362 systemd[1]: Detected virtualization kvm. Feb 12 19:07:57.607383 systemd[1]: Detected architecture arm64. Feb 12 19:07:57.607452 systemd[1]: Detected first boot. Feb 12 19:07:57.607466 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:07:57.607485 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:07:57.607496 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:07:57.607507 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:07:57.607522 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:07:57.607534 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
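
Annotation: the "systemd 252 running in system mode (+PAM +AUDIT ...)" line above packs the compile-time feature set into +NAME/-NAME tokens. A quick way to tabulate them (token list copied verbatim from that line):

flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
         "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
         "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
         "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")
enabled  = [f[1:] for f in flags.split() if f[0] == "+"]
disabled = [f[1:] for f in flags.split() if f[0] == "-"]
print(len(enabled), "enabled: ", " ".join(enabled))    # 23 enabled
print(len(disabled), "disabled:", " ".join(disabled))  # 12 disabled
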
Feb 12 19:07:57.607545 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:07:57.607558 systemd[1]: Stopped iscsiuio.service. Feb 12 19:07:57.607570 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:07:57.607581 systemd[1]: Stopped iscsid.service. Feb 12 19:07:57.607591 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:07:57.607602 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:07:57.607612 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:07:57.607623 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:07:57.607634 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:07:57.607645 systemd[1]: Created slice system-getty.slice. Feb 12 19:07:57.607656 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:07:57.607666 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:07:57.607681 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:07:57.607692 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:07:57.607702 systemd[1]: Created slice user.slice. Feb 12 19:07:57.607712 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:07:57.607723 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:07:57.607758 systemd[1]: Set up automount boot.automount. Feb 12 19:07:57.607771 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:07:57.607781 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:07:57.607791 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:07:57.607802 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:07:57.607813 systemd[1]: Reached target integritysetup.target. Feb 12 19:07:57.607825 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:07:57.607836 systemd[1]: Reached target remote-fs.target. Feb 12 19:07:57.607846 systemd[1]: Reached target slices.target. Feb 12 19:07:57.607858 systemd[1]: Reached target swap.target. Feb 12 19:07:57.607868 systemd[1]: Reached target torcx.target. Feb 12 19:07:57.607878 systemd[1]: Reached target veritysetup.target. Feb 12 19:07:57.607889 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:07:57.607899 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:07:57.607909 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:07:57.607919 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:07:57.607937 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:07:57.607949 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:07:57.607961 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:07:57.607971 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:07:57.607982 systemd[1]: Mounting media.mount... Feb 12 19:07:57.607992 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:07:57.608002 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:07:57.608013 systemd[1]: Mounting tmp.mount... Feb 12 19:07:57.608023 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:07:57.608033 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:07:57.608044 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:07:57.608056 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:07:57.608070 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:07:57.608080 systemd[1]: Starting modprobe@drm.service... Feb 12 19:07:57.608091 systemd[1]: Starting modprobe@efi_pstore.service... 
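
Annotation: unit names above such as system-addon\x2dconfig.slice use systemd's unit-name escaping, where \x2d stands for a literal "-" inside a single name component (the same encoding systemd-escape produces). A tiny decoder, fed with names taken from the lines above:

import re

def unescape_unit(name: str) -> str:
    # systemd writes escaped characters inside a name component as \xNN (hex byte).
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)

for unit in (r"system-addon\x2dconfig.slice",
             r"system-serial\x2dgetty.slice",
             r"system-systemd\x2dfsck.slice"):
    print(unit, "->", unescape_unit(unit))
# e.g. system-addon\x2dconfig.slice -> system-addon-config.slice
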
Feb 12 19:07:57.608101 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:07:57.608112 systemd[1]: Starting modprobe@loop.service... Feb 12 19:07:57.608123 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:07:57.608133 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:07:57.608144 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:07:57.608155 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:07:57.608165 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:07:57.608175 systemd[1]: Stopped systemd-journald.service. Feb 12 19:07:57.608185 kernel: loop: module loaded Feb 12 19:07:57.608196 systemd[1]: Starting systemd-journald.service... Feb 12 19:07:57.608209 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:07:57.608219 kernel: fuse: init (API version 7.34) Feb 12 19:07:57.608228 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:07:57.608239 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:07:57.608249 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:07:57.608260 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:07:57.608271 systemd[1]: Stopped verity-setup.service. Feb 12 19:07:57.608283 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:07:57.608297 systemd-journald[996]: Journal started Feb 12 19:07:57.608339 systemd-journald[996]: Runtime Journal (/run/log/journal/5cac4c753e0b4b2ca27829a060326522) is 6.0M, max 48.7M, 42.6M free. Feb 12 19:07:55.498000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:07:55.719000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:07:55.719000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:07:55.719000 audit: BPF prog-id=10 op=LOAD Feb 12 19:07:55.719000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:07:55.719000 audit: BPF prog-id=11 op=LOAD Feb 12 19:07:55.719000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:07:55.760000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:07:55.760000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:07:55.760000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:07:55.761000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:07:55.761000 
audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5985 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:07:55.761000 audit: CWD cwd="/" Feb 12 19:07:55.761000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:07:55.761000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:07:55.761000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:07:57.484000 audit: BPF prog-id=12 op=LOAD Feb 12 19:07:57.484000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:07:57.485000 audit: BPF prog-id=13 op=LOAD Feb 12 19:07:57.485000 audit: BPF prog-id=14 op=LOAD Feb 12 19:07:57.485000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:07:57.485000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:07:57.485000 audit: BPF prog-id=15 op=LOAD Feb 12 19:07:57.485000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:07:57.485000 audit: BPF prog-id=16 op=LOAD Feb 12 19:07:57.485000 audit: BPF prog-id=17 op=LOAD Feb 12 19:07:57.485000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:07:57.485000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:07:57.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.497000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:07:57.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:07:57.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.587000 audit: BPF prog-id=18 op=LOAD Feb 12 19:07:57.587000 audit: BPF prog-id=19 op=LOAD Feb 12 19:07:57.587000 audit: BPF prog-id=20 op=LOAD Feb 12 19:07:57.587000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:07:57.587000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:07:57.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.605000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:07:57.605000 audit[996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffed93d560 a2=4000 a3=1 items=0 ppid=1 pid=996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:07:57.605000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:07:57.481447 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:07:55.758348 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:07:57.481460 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:07:55.758620 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:07:57.487073 systemd[1]: systemd-journald.service: Deactivated successfully. 
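
Annotation: the audit PROCTITLE fields above hold the recorded process's argv, hex-encoded with NUL bytes between arguments. Decoding the torcx-generator record (hex copied from the PROCTITLE lines above) gives exactly 128 bytes, the point at which the field is cut off, which is why the last argument arrives truncated:

proctitle_hex = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F7200"
    "2F72756E2F73797374656D642F67656E657261746F7200"
    "2F72756E2F73797374656D642F67656E657261746F722E6561726C7900"
    "2F72756E2F73797374656D642F67656E657261746F722E6C61")
raw = bytes.fromhex(proctitle_hex)
print(len(raw), "bytes")                         # 128 bytes
print([arg.decode() for arg in raw.split(b"\x00")])
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator', '/run/systemd/generator.early',
#  '/run/systemd/generator.la']
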
Feb 12 19:07:55.758640 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:07:55.758673 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:07:55.758682 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:07:55.758712 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:07:55.758724 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:07:55.758921 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:07:55.758954 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:07:55.758966 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:07:57.610526 systemd[1]: Started systemd-journald.service. Feb 12 19:07:55.760406 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:07:57.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:55.760444 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:07:55.760464 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:07:55.760479 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:07:55.760510 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:07:57.610967 systemd[1]: Mounted dev-mqueue.mount. 
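
Annotation: the torcx-generator messages above walk a fixed list of store paths (from its "common configuration parsed" line) and report most of them as "store skipped" because they do not exist. A sketch of the same scan, reusing that path list and the *.torcx.tgz archive naming seen above (an outline only, not the generator's own logic):

from pathlib import Path

# Path list copied from the "common configuration parsed" line above.
store_paths = ["/usr/share/torcx/store",
               "/usr/share/oem/torcx/store/3510.3.2",
               "/usr/share/oem/torcx/store",
               "/var/lib/torcx/store/3510.3.2",
               "/var/lib/torcx/store"]
for path in map(Path, store_paths):
    if path.is_dir():
        archives = sorted(a.name for a in path.glob("*.torcx.tgz"))
        print("store found:  ", path, archives)
    else:
        print("store skipped:", path)
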
Feb 12 19:07:55.760524 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:07:57.192493 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:07:57.192759 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:07:57.192857 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:07:57.193011 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:07:57.193060 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:07:57.193119 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-12T19:07:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:07:57.611846 systemd[1]: Mounted media.mount. Feb 12 19:07:57.612430 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:07:57.613200 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:07:57.614107 systemd[1]: Mounted tmp.mount. Feb 12 19:07:57.615193 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:07:57.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.616235 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:07:57.616437 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:07:57.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.617423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:07:57.617574 systemd[1]: Finished modprobe@dm_mod.service. 
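
Annotation: the "system state sealed" message above records the torcx result as KEY="value" pairs written to /run/metadata/torcx. Parsing that content back into a mapping (values copied from the sealed line, without the log's backslash escaping):

import shlex

sealed = ('TORCX_LOWER_PROFILES="vendor" TORCX_UPPER_PROFILE="" '
          'TORCX_PROFILE_PATH="/run/torcx/profile.json" '
          'TORCX_BINDIR="/run/torcx/bin" TORCX_UNPACKDIR="/run/torcx/unpack"')
state = dict(token.split("=", 1) for token in shlex.split(sealed))
print(state["TORCX_PROFILE_PATH"])   # /run/torcx/profile.json
print(state["TORCX_BINDIR"])         # /run/torcx/bin
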
Feb 12 19:07:57.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.618655 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:07:57.618800 systemd[1]: Finished modprobe@drm.service. Feb 12 19:07:57.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.619828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:07:57.619991 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:07:57.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.621136 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:07:57.621270 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:07:57.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.622254 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:07:57.623555 systemd[1]: Finished modprobe@loop.service. Feb 12 19:07:57.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.624589 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:07:57.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.625806 systemd[1]: Finished systemd-network-generator.service. 
Feb 12 19:07:57.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.626916 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:07:57.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.628025 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:07:57.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.629280 systemd[1]: Reached target network-pre.target. Feb 12 19:07:57.631226 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:07:57.633140 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:07:57.633854 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:07:57.635570 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:07:57.638818 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:07:57.639662 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:07:57.640621 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:07:57.641518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:07:57.642576 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:07:57.646569 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:07:57.649127 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:07:57.651777 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:07:57.653695 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:07:57.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.655649 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:07:57.662042 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:07:57.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.665022 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:07:57.668765 systemd-journald[996]: Time spent on flushing to /var/log/journal/5cac4c753e0b4b2ca27829a060326522 is 12.820ms for 1031 entries. Feb 12 19:07:57.668765 systemd-journald[996]: System Journal (/var/log/journal/5cac4c753e0b4b2ca27829a060326522) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:07:57.689186 systemd-journald[996]: Received client request to flush runtime journal. Feb 12 19:07:57.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:07:57.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:57.670391 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:07:57.671559 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:07:57.676419 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:07:57.690100 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:07:57.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.031122 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:07:58.032000 audit: BPF prog-id=21 op=LOAD Feb 12 19:07:58.032000 audit: BPF prog-id=22 op=LOAD Feb 12 19:07:58.032000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:07:58.032000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:07:58.033529 systemd[1]: Starting systemd-udevd.service... Feb 12 19:07:58.054358 systemd-udevd[1033]: Using default interface naming scheme 'v252'. Feb 12 19:07:58.067339 systemd[1]: Started systemd-udevd.service. Feb 12 19:07:58.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.069000 audit: BPF prog-id=23 op=LOAD Feb 12 19:07:58.077000 audit: BPF prog-id=24 op=LOAD Feb 12 19:07:58.077000 audit: BPF prog-id=25 op=LOAD Feb 12 19:07:58.077000 audit: BPF prog-id=26 op=LOAD Feb 12 19:07:58.073799 systemd[1]: Starting systemd-networkd.service... Feb 12 19:07:58.079159 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:07:58.092823 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 12 19:07:58.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.117090 systemd[1]: Started systemd-userdbd.service. Feb 12 19:07:58.128378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:07:58.185793 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:07:58.186523 systemd-networkd[1052]: lo: Link UP Feb 12 19:07:58.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.186534 systemd-networkd[1052]: lo: Gained carrier Feb 12 19:07:58.186897 systemd-networkd[1052]: Enumeration completed Feb 12 19:07:58.187006 systemd-networkd[1052]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:07:58.187939 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:07:58.188788 systemd[1]: Started systemd-networkd.service. 
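
Annotation: two quick consistency checks on the journald figures reported above, per-entry flush time and System Journal headroom:

# "Time spent on flushing ... is 12.820ms for 1031 entries"
ms, entries = 12.820, 1031
print(f"{ms / entries * 1000:.1f} us per flushed entry")   # ~12.4 us

# "System Journal (...) is 8.0M, max 195.6M, 187.6M free"
used, cap, free = 8.0, 195.6, 187.6
print("headroom adds up:", abs(cap - used - free) < 0.05)  # True
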
Feb 12 19:07:58.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.190130 systemd-networkd[1052]: eth0: Link UP Feb 12 19:07:58.190138 systemd-networkd[1052]: eth0: Gained carrier Feb 12 19:07:58.198011 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:07:58.213524 systemd-networkd[1052]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:07:58.235246 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:07:58.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.236113 systemd[1]: Reached target cryptsetup.target. Feb 12 19:07:58.237861 systemd[1]: Starting lvm2-activation.service... Feb 12 19:07:58.241448 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:07:58.277270 systemd[1]: Finished lvm2-activation.service. Feb 12 19:07:58.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.278049 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:07:58.278672 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:07:58.278699 systemd[1]: Reached target local-fs.target. Feb 12 19:07:58.279241 systemd[1]: Reached target machines.target. Feb 12 19:07:58.281165 systemd[1]: Starting ldconfig.service... Feb 12 19:07:58.282127 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:07:58.282200 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:07:58.283573 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:07:58.285532 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:07:58.287729 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:07:58.288585 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:07:58.288648 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:07:58.289708 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:07:58.292443 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) Feb 12 19:07:58.293629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:07:58.307558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:07:58.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.364800 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Feb 12 19:07:58.370120 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:07:58.372394 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:07:58.372802 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:07:58.374296 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:07:58.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.391088 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) Feb 12 19:07:58.391088 systemd-fsck[1077]: /dev/vda1: 236 files, 113719/258078 clusters Feb 12 19:07:58.393717 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:07:58.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.397269 systemd[1]: Mounting boot.mount... Feb 12 19:07:58.406180 systemd[1]: Mounted boot.mount. Feb 12 19:07:58.417585 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:07:58.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.461573 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:07:58.466031 systemd[1]: Finished ldconfig.service. Feb 12 19:07:58.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.491273 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:07:58.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.493345 systemd[1]: Starting audit-rules.service... Feb 12 19:07:58.495060 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:07:58.497157 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:07:58.498000 audit: BPF prog-id=27 op=LOAD Feb 12 19:07:58.499718 systemd[1]: Starting systemd-resolved.service... Feb 12 19:07:58.500000 audit: BPF prog-id=28 op=LOAD Feb 12 19:07:58.502208 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:07:58.505012 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:07:58.506366 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:07:58.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.507649 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 12 19:07:58.511000 audit[1092]: SYSTEM_BOOT pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.514541 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:07:58.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.515642 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:07:58.517725 systemd[1]: Starting systemd-update-done.service... Feb 12 19:07:58.523684 systemd[1]: Finished systemd-update-done.service. Feb 12 19:07:58.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:07:58.541000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:07:58.541000 audit[1103]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeb07c430 a2=420 a3=0 items=0 ppid=1081 pid=1103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:07:58.541000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:07:58.542680 augenrules[1103]: No rules Feb 12 19:07:58.543173 systemd[1]: Finished audit-rules.service. Feb 12 19:07:58.552908 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:07:58.553606 systemd-timesyncd[1091]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:07:58.553657 systemd-timesyncd[1091]: Initial clock synchronization to Mon 2024-02-12 19:07:58.337301 UTC. Feb 12 19:07:58.553970 systemd[1]: Reached target time-set.target. Feb 12 19:07:58.555656 systemd-resolved[1085]: Positive Trust Anchors: Feb 12 19:07:58.555863 systemd-resolved[1085]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:07:58.555936 systemd-resolved[1085]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:07:58.565540 systemd-resolved[1085]: Defaulting to hostname 'linux'. Feb 12 19:07:58.567015 systemd[1]: Started systemd-resolved.service. Feb 12 19:07:58.567690 systemd[1]: Reached target network.target. Feb 12 19:07:58.568222 systemd[1]: Reached target nss-lookup.target. Feb 12 19:07:58.568811 systemd[1]: Reached target sysinit.target. Feb 12 19:07:58.569424 systemd[1]: Started motdgen.path. 
Feb 12 19:07:58.569945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:07:58.570858 systemd[1]: Started logrotate.timer. Feb 12 19:07:58.571620 systemd[1]: Started mdadm.timer. Feb 12 19:07:58.572217 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:07:58.573019 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:07:58.573052 systemd[1]: Reached target paths.target. Feb 12 19:07:58.573734 systemd[1]: Reached target timers.target. Feb 12 19:07:58.574819 systemd[1]: Listening on dbus.socket. Feb 12 19:07:58.576527 systemd[1]: Starting docker.socket... Feb 12 19:07:58.579412 systemd[1]: Listening on sshd.socket. Feb 12 19:07:58.580143 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:07:58.580575 systemd[1]: Listening on docker.socket. Feb 12 19:07:58.581340 systemd[1]: Reached target sockets.target. Feb 12 19:07:58.582084 systemd[1]: Reached target basic.target. Feb 12 19:07:58.582810 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:07:58.582835 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:07:58.583804 systemd[1]: Starting containerd.service... Feb 12 19:07:58.585423 systemd[1]: Starting dbus.service... Feb 12 19:07:58.586948 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:07:58.588732 systemd[1]: Starting extend-filesystems.service... Feb 12 19:07:58.589517 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:07:58.590801 systemd[1]: Starting motdgen.service... Feb 12 19:07:58.592545 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:07:58.596267 systemd[1]: Starting prepare-critools.service... Feb 12 19:07:58.597982 systemd[1]: Starting prepare-helm.service... Feb 12 19:07:58.599696 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:07:58.600456 jq[1113]: false Feb 12 19:07:58.601960 systemd[1]: Starting sshd-keygen.service... Feb 12 19:07:58.606142 systemd[1]: Starting systemd-logind.service... Feb 12 19:07:58.607003 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:07:58.607081 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:07:58.607528 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:07:58.610113 systemd[1]: Starting update-engine.service... Feb 12 19:07:58.611788 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:07:58.614213 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:07:58.614755 jq[1133]: true Feb 12 19:07:58.614381 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:07:58.618210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:07:58.618360 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 12 19:07:58.622717 dbus-daemon[1112]: [system] SELinux support is enabled Feb 12 19:07:58.622933 systemd[1]: Started dbus.service. Feb 12 19:07:58.631515 tar[1135]: ./ Feb 12 19:07:58.631515 tar[1135]: ./loopback Feb 12 19:07:58.631808 jq[1142]: true Feb 12 19:07:58.631871 tar[1138]: linux-arm64/helm Feb 12 19:07:58.626081 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:07:58.632046 tar[1136]: crictl Feb 12 19:07:58.626112 systemd[1]: Reached target system-config.target. Feb 12 19:07:58.626840 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:07:58.626869 systemd[1]: Reached target user-config.target. Feb 12 19:07:58.640863 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:07:58.641013 systemd[1]: Finished motdgen.service. Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda1 Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda2 Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda3 Feb 12 19:07:58.642714 extend-filesystems[1114]: Found usr Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda4 Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda6 Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda7 Feb 12 19:07:58.642714 extend-filesystems[1114]: Found vda9 Feb 12 19:07:58.642714 extend-filesystems[1114]: Checking size of /dev/vda9 Feb 12 19:07:58.675810 systemd-logind[1126]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 19:07:58.679096 extend-filesystems[1114]: Resized partition /dev/vda9 Feb 12 19:07:58.682876 extend-filesystems[1164]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:07:58.682930 systemd-logind[1126]: New seat seat0. Feb 12 19:07:58.687591 systemd[1]: Started systemd-logind.service. Feb 12 19:07:58.689385 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:07:58.699606 update_engine[1131]: I0212 19:07:58.699349 1131 main.cc:92] Flatcar Update Engine starting Feb 12 19:07:58.705420 systemd[1]: Started update-engine.service. Feb 12 19:07:58.705558 update_engine[1131]: I0212 19:07:58.705456 1131 update_check_scheduler.cc:74] Next update check in 7m57s Feb 12 19:07:58.707773 systemd[1]: Started locksmithd.service. Feb 12 19:07:58.738392 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:07:58.757457 extend-filesystems[1164]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:07:58.757457 extend-filesystems[1164]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:07:58.757457 extend-filesystems[1164]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 19:07:58.764188 extend-filesystems[1114]: Resized filesystem in /dev/vda9 Feb 12 19:07:58.765086 tar[1135]: ./bandwidth Feb 12 19:07:58.761641 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:07:58.761796 systemd[1]: Finished extend-filesystems.service. Feb 12 19:07:58.765865 bash[1162]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:07:58.766366 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
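For scale, the online resize reported above can be converted from 4 KiB blocks to bytes; a quick arithmetic check of the block counts taken from the EXT4-fs and extend-filesystems lines for /dev/vda9:

def to_gib(blocks, block_size=4096):
    # EXT4-fs (vda9) uses 4k blocks per the kernel message above
    return blocks * block_size / 2**30

old_blocks, new_blocks = 553472, 1864699
print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")   # roughly 2.11 GiB -> 7.11 GiB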
Feb 12 19:07:58.780623 env[1143]: time="2024-02-12T19:07:58.778364120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:07:58.813253 env[1143]: time="2024-02-12T19:07:58.813157760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:07:58.813356 env[1143]: time="2024-02-12T19:07:58.813322040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:07:58.819870 tar[1135]: ./ptp Feb 12 19:07:58.837626 env[1143]: time="2024-02-12T19:07:58.837562400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:07:58.837626 env[1143]: time="2024-02-12T19:07:58.837606360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:07:58.837904 env[1143]: time="2024-02-12T19:07:58.837839080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:07:58.837904 env[1143]: time="2024-02-12T19:07:58.837860920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:07:58.837904 env[1143]: time="2024-02-12T19:07:58.837875960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:07:58.837904 env[1143]: time="2024-02-12T19:07:58.837885360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:07:58.838028 env[1143]: time="2024-02-12T19:07:58.837957000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:07:58.838202 env[1143]: time="2024-02-12T19:07:58.838180560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:07:58.838313 env[1143]: time="2024-02-12T19:07:58.838293080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:07:58.838350 env[1143]: time="2024-02-12T19:07:58.838313080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:07:58.838395 env[1143]: time="2024-02-12T19:07:58.838365200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:07:58.838395 env[1143]: time="2024-02-12T19:07:58.838388960Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:07:58.842045 env[1143]: time="2024-02-12T19:07:58.842009920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:07:58.842045 env[1143]: time="2024-02-12T19:07:58.842046400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 12 19:07:58.842165 env[1143]: time="2024-02-12T19:07:58.842060720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:07:58.842165 env[1143]: time="2024-02-12T19:07:58.842091160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842165 env[1143]: time="2024-02-12T19:07:58.842105840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842165 env[1143]: time="2024-02-12T19:07:58.842121440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842165 env[1143]: time="2024-02-12T19:07:58.842135600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842502 env[1143]: time="2024-02-12T19:07:58.842478400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842557 env[1143]: time="2024-02-12T19:07:58.842504560Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842557 env[1143]: time="2024-02-12T19:07:58.842519240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842557 env[1143]: time="2024-02-12T19:07:58.842531880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.842557 env[1143]: time="2024-02-12T19:07:58.842543960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:07:58.842697 env[1143]: time="2024-02-12T19:07:58.842674320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:07:58.842772 env[1143]: time="2024-02-12T19:07:58.842752680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:07:58.843016 env[1143]: time="2024-02-12T19:07:58.842992360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:07:58.843063 env[1143]: time="2024-02-12T19:07:58.843020240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843063 env[1143]: time="2024-02-12T19:07:58.843033240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:07:58.843162 env[1143]: time="2024-02-12T19:07:58.843143000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843162 env[1143]: time="2024-02-12T19:07:58.843159640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843213 env[1143]: time="2024-02-12T19:07:58.843173760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843213 env[1143]: time="2024-02-12T19:07:58.843185520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843213 env[1143]: time="2024-02-12T19:07:58.843197360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 12 19:07:58.843213 env[1143]: time="2024-02-12T19:07:58.843209360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843290 env[1143]: time="2024-02-12T19:07:58.843220400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843290 env[1143]: time="2024-02-12T19:07:58.843231640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843290 env[1143]: time="2024-02-12T19:07:58.843243880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:07:58.843425 env[1143]: time="2024-02-12T19:07:58.843353520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843425 env[1143]: time="2024-02-12T19:07:58.843394800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843425 env[1143]: time="2024-02-12T19:07:58.843409680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:07:58.843425 env[1143]: time="2024-02-12T19:07:58.843421360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:07:58.843534 env[1143]: time="2024-02-12T19:07:58.843435080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:07:58.843534 env[1143]: time="2024-02-12T19:07:58.843447440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:07:58.843534 env[1143]: time="2024-02-12T19:07:58.843464320Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:07:58.843534 env[1143]: time="2024-02-12T19:07:58.843506320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:07:58.843775 env[1143]: time="2024-02-12T19:07:58.843719760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.843776240Z" level=info msg="Connect containerd service" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.843805720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844489960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844846000Z" level=info msg="Start subscribing containerd event" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844900720Z" level=info msg="Start recovering state" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844959080Z" level=info msg="Start event monitor" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844978840Z" level=info msg="Start snapshots syncer" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844988920Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844996080Z" level=info msg="Start streaming server" Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.844875000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 19:07:58.846208 env[1143]: time="2024-02-12T19:07:58.845112360Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:07:58.845239 systemd[1]: Started containerd.service. Feb 12 19:07:58.855493 tar[1135]: ./vlan Feb 12 19:07:58.856536 env[1143]: time="2024-02-12T19:07:58.856496880Z" level=info msg="containerd successfully booted in 0.078911s" Feb 12 19:07:58.884591 tar[1135]: ./host-device Feb 12 19:07:58.912305 tar[1135]: ./tuning Feb 12 19:07:58.937300 tar[1135]: ./vrf Feb 12 19:07:58.963430 tar[1135]: ./sbr Feb 12 19:07:58.988790 tar[1135]: ./tap Feb 12 19:07:59.017483 tar[1135]: ./dhcp Feb 12 19:07:59.087008 tar[1135]: ./static Feb 12 19:07:59.107430 tar[1135]: ./firewall Feb 12 19:07:59.130431 tar[1138]: linux-arm64/LICENSE Feb 12 19:07:59.130556 tar[1138]: linux-arm64/README.md Feb 12 19:07:59.134425 systemd[1]: Finished prepare-helm.service. Feb 12 19:07:59.141042 tar[1135]: ./macvlan Feb 12 19:07:59.163843 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:07:59.173606 tar[1135]: ./dummy Feb 12 19:07:59.189834 systemd[1]: Finished prepare-critools.service. Feb 12 19:07:59.201905 tar[1135]: ./bridge Feb 12 19:07:59.232223 tar[1135]: ./ipvlan Feb 12 19:07:59.260026 tar[1135]: ./portmap Feb 12 19:07:59.286460 tar[1135]: ./host-local Feb 12 19:07:59.318754 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:07:59.411530 systemd-networkd[1052]: eth0: Gained IPv6LL Feb 12 19:07:59.640023 sshd_keygen[1137]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:07:59.656222 systemd[1]: Finished sshd-keygen.service. Feb 12 19:07:59.658367 systemd[1]: Starting issuegen.service... Feb 12 19:07:59.662402 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:07:59.662538 systemd[1]: Finished issuegen.service. Feb 12 19:07:59.664518 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:07:59.669935 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:07:59.672005 systemd[1]: Started getty@tty1.service. Feb 12 19:07:59.673820 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:07:59.674832 systemd[1]: Reached target getty.target. Feb 12 19:07:59.675633 systemd[1]: Reached target multi-user.target. Feb 12 19:07:59.677510 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:07:59.683571 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:07:59.683708 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:07:59.684679 systemd[1]: Startup finished in 630ms (kernel) + 5.873s (initrd) + 4.233s (userspace) = 10.737s. Feb 12 19:08:02.068862 systemd[1]: Created slice system-sshd.slice. Feb 12 19:08:02.069899 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:32772.service. Feb 12 19:08:02.119823 sshd[1200]: Accepted publickey for core from 10.0.0.1 port 32772 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:08:02.121696 sshd[1200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:08:02.131927 systemd-logind[1126]: New session 1 of user core. Feb 12 19:08:02.132846 systemd[1]: Created slice user-500.slice. Feb 12 19:08:02.133906 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:08:02.141462 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:08:02.142763 systemd[1]: Starting user@500.service... 
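The CRI settings containerd dumped at 19:07:58.843719 (SystemdCgroup:true for the runc runtime, sandbox image registry.k8s.io/pause:3.6, CNI directories /opt/cni/bin and /etc/cni/net.d) normally originate from /etc/containerd/config.toml. A minimal sketch of the corresponding TOML fragment, parsed with the standard-library reader; the exact file contents on this host are an assumption, not shown in the log:

import tomllib  # Python 3.11+

fragment = """
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
"""
cri = tomllib.loads(fragment)["plugins"]["io.containerd.grpc.v1.cri"]
print(cri["sandbox_image"])                                            # registry.k8s.io/pause:3.6
print(cri["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])  # True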
Feb 12 19:08:02.145179 (systemd)[1203]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:08:02.200716 systemd[1203]: Queued start job for default target default.target. Feb 12 19:08:02.201183 systemd[1203]: Reached target paths.target. Feb 12 19:08:02.201202 systemd[1203]: Reached target sockets.target. Feb 12 19:08:02.201213 systemd[1203]: Reached target timers.target. Feb 12 19:08:02.201223 systemd[1203]: Reached target basic.target. Feb 12 19:08:02.201272 systemd[1203]: Reached target default.target. Feb 12 19:08:02.201295 systemd[1203]: Startup finished in 50ms. Feb 12 19:08:02.201516 systemd[1]: Started user@500.service. Feb 12 19:08:02.202454 systemd[1]: Started session-1.scope. Feb 12 19:08:02.252580 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:32774.service. Feb 12 19:08:02.295752 sshd[1212]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:08:02.297575 sshd[1212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:08:02.301840 systemd[1]: Started session-2.scope. Feb 12 19:08:02.302125 systemd-logind[1126]: New session 2 of user core. Feb 12 19:08:02.354975 sshd[1212]: pam_unix(sshd:session): session closed for user core Feb 12 19:08:02.358075 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:32774.service: Deactivated successfully. Feb 12 19:08:02.358659 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:08:02.359093 systemd-logind[1126]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:08:02.360411 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:32784.service. Feb 12 19:08:02.361004 systemd-logind[1126]: Removed session 2. Feb 12 19:08:02.402487 sshd[1218]: Accepted publickey for core from 10.0.0.1 port 32784 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:08:02.403723 sshd[1218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:08:02.407068 systemd-logind[1126]: New session 3 of user core. Feb 12 19:08:02.407966 systemd[1]: Started session-3.scope. Feb 12 19:08:02.455125 sshd[1218]: pam_unix(sshd:session): session closed for user core Feb 12 19:08:02.458683 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:32794.service. Feb 12 19:08:02.459157 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:32784.service: Deactivated successfully. Feb 12 19:08:02.459747 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:08:02.460166 systemd-logind[1126]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:08:02.460776 systemd-logind[1126]: Removed session 3. Feb 12 19:08:02.500665 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 32794 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:08:02.501834 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:08:02.505064 systemd-logind[1126]: New session 4 of user core. Feb 12 19:08:02.505865 systemd[1]: Started session-4.scope. Feb 12 19:08:02.557205 sshd[1223]: pam_unix(sshd:session): session closed for user core Feb 12 19:08:02.560960 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:32794.service: Deactivated successfully. Feb 12 19:08:02.561494 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:08:02.561960 systemd-logind[1126]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:08:02.562941 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:32796.service. Feb 12 19:08:02.563495 systemd-logind[1126]: Removed session 4. 
Feb 12 19:08:02.604519 sshd[1230]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:08:02.605963 sshd[1230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:08:02.609782 systemd-logind[1126]: New session 5 of user core. Feb 12 19:08:02.610592 systemd[1]: Started session-5.scope. Feb 12 19:08:02.666234 sudo[1233]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:08:02.666462 sudo[1233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:08:03.226618 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:08:03.233661 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:08:03.234469 systemd[1]: Reached target network-online.target. Feb 12 19:08:03.235810 systemd[1]: Starting docker.service... Feb 12 19:08:03.309053 env[1250]: time="2024-02-12T19:08:03.308974960Z" level=info msg="Starting up" Feb 12 19:08:03.310614 env[1250]: time="2024-02-12T19:08:03.310589882Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:08:03.310614 env[1250]: time="2024-02-12T19:08:03.310610550Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:08:03.310707 env[1250]: time="2024-02-12T19:08:03.310631492Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:08:03.310707 env[1250]: time="2024-02-12T19:08:03.310641452Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:08:03.312754 env[1250]: time="2024-02-12T19:08:03.312729321Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:08:03.312754 env[1250]: time="2024-02-12T19:08:03.312749831Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:08:03.312852 env[1250]: time="2024-02-12T19:08:03.312764829Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:08:03.312852 env[1250]: time="2024-02-12T19:08:03.312774277Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:08:03.316450 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport751085072-merged.mount: Deactivated successfully. Feb 12 19:08:03.541993 env[1250]: time="2024-02-12T19:08:03.541886592Z" level=info msg="Loading containers: start." Feb 12 19:08:03.645386 kernel: Initializing XFRM netlink socket Feb 12 19:08:03.667423 env[1250]: time="2024-02-12T19:08:03.667348583Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:08:03.716351 systemd-networkd[1052]: docker0: Link UP Feb 12 19:08:03.724420 env[1250]: time="2024-02-12T19:08:03.724367041Z" level=info msg="Loading containers: done." Feb 12 19:08:03.745326 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2451884078-merged.mount: Deactivated successfully. 
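dockerd's note above about the default bridge can be unpacked with the stdlib ipaddress module: 172.17.0.0/16 is the pool the daemon allocates container addresses from, the bridge itself conventionally takes the first host address, and --bip would override it. A small sketch:

import ipaddress

bridge_net = ipaddress.ip_network("172.17.0.0/16")        # docker0 default pool from the log
print(bridge_net.netmask, bridge_net.num_addresses - 2)   # 255.255.0.0, 65534 usable host addresses
print(next(bridge_net.hosts()))                           # 172.17.0.1, the usual docker0 address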
Feb 12 19:08:03.747292 env[1250]: time="2024-02-12T19:08:03.747258369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:08:03.747585 env[1250]: time="2024-02-12T19:08:03.747567670Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:08:03.747763 env[1250]: time="2024-02-12T19:08:03.747747495Z" level=info msg="Daemon has completed initialization" Feb 12 19:08:03.762060 systemd[1]: Started docker.service. Feb 12 19:08:03.768979 env[1250]: time="2024-02-12T19:08:03.768855491Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:08:03.785004 systemd[1]: Reloading. Feb 12 19:08:03.823591 /usr/lib/systemd/system-generators/torcx-generator[1396]: time="2024-02-12T19:08:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:08:03.824130 /usr/lib/systemd/system-generators/torcx-generator[1396]: time="2024-02-12T19:08:03Z" level=info msg="torcx already run" Feb 12 19:08:03.875733 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:08:03.875752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:08:03.890812 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:08:03.951292 systemd[1]: Started kubelet.service. Feb 12 19:08:04.112942 kubelet[1431]: E0212 19:08:04.112820 1431 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 19:08:04.115249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:08:04.115383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:08:04.415799 env[1143]: time="2024-02-12T19:08:04.415688412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 12 19:08:05.152066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044705244.mount: Deactivated successfully. 
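The kubelet failure above is expected on a node that has not been joined to a cluster yet: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join. A minimal sketch of the kind of KubeletConfiguration stub the kubelet is looking for; the field values here are assumptions for illustration, not what kubeadm would generate for this host:

from pathlib import Path

# Hypothetical minimal KubeletConfiguration stub; on a kubeadm-managed node the real file
# is produced by "kubeadm init" / "kubeadm join", so treat this as illustration only.
stub = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)   # /var/lib/kubelet may not exist yet
path.write_text(stub)                            # requires root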
Feb 12 19:08:07.112832 env[1143]: time="2024-02-12T19:08:07.112784610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:07.114327 env[1143]: time="2024-02-12T19:08:07.114297508Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:07.116039 env[1143]: time="2024-02-12T19:08:07.116011718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:07.119816 env[1143]: time="2024-02-12T19:08:07.119784869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:07.120686 env[1143]: time="2024-02-12T19:08:07.120654833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\"" Feb 12 19:08:07.131320 env[1143]: time="2024-02-12T19:08:07.131289978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 12 19:08:09.561433 env[1143]: time="2024-02-12T19:08:09.561355639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:09.563183 env[1143]: time="2024-02-12T19:08:09.563145502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:09.564940 env[1143]: time="2024-02-12T19:08:09.564908278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:09.566809 env[1143]: time="2024-02-12T19:08:09.566769668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:09.567601 env[1143]: time="2024-02-12T19:08:09.567563781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\"" Feb 12 19:08:09.576641 env[1143]: time="2024-02-12T19:08:09.576576558Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 12 19:08:10.961039 env[1143]: time="2024-02-12T19:08:10.960990637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:10.962442 env[1143]: time="2024-02-12T19:08:10.962415556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:10.964511 env[1143]: 
time="2024-02-12T19:08:10.964481860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:10.967789 env[1143]: time="2024-02-12T19:08:10.967742168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:10.968562 env[1143]: time="2024-02-12T19:08:10.968531666Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\"" Feb 12 19:08:10.979248 env[1143]: time="2024-02-12T19:08:10.979200155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 19:08:12.011471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285751575.mount: Deactivated successfully. Feb 12 19:08:12.400928 env[1143]: time="2024-02-12T19:08:12.400809231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.402472 env[1143]: time="2024-02-12T19:08:12.402426571Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.404406 env[1143]: time="2024-02-12T19:08:12.404355817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.405269 env[1143]: time="2024-02-12T19:08:12.405235287Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.406292 env[1143]: time="2024-02-12T19:08:12.406249354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 12 19:08:12.415668 env[1143]: time="2024-02-12T19:08:12.415625358Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:08:12.871151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202852336.mount: Deactivated successfully. 
Feb 12 19:08:12.874808 env[1143]: time="2024-02-12T19:08:12.874752044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.876742 env[1143]: time="2024-02-12T19:08:12.876701116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.878654 env[1143]: time="2024-02-12T19:08:12.878621803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.879887 env[1143]: time="2024-02-12T19:08:12.879856215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:12.880492 env[1143]: time="2024-02-12T19:08:12.880458930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 12 19:08:12.889350 env[1143]: time="2024-02-12T19:08:12.889303797Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 12 19:08:13.620053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28452674.mount: Deactivated successfully. Feb 12 19:08:14.182798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:08:14.182971 systemd[1]: Stopped kubelet.service. Feb 12 19:08:14.184442 systemd[1]: Started kubelet.service. Feb 12 19:08:14.226188 kubelet[1489]: E0212 19:08:14.226128 1489 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 19:08:14.228873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:08:14.229008 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 19:08:15.697564 env[1143]: time="2024-02-12T19:08:15.697515661Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:15.699350 env[1143]: time="2024-02-12T19:08:15.699316536Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:15.701451 env[1143]: time="2024-02-12T19:08:15.701422315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:15.703265 env[1143]: time="2024-02-12T19:08:15.703236467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:15.703928 env[1143]: time="2024-02-12T19:08:15.703898627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\"" Feb 12 19:08:15.712634 env[1143]: time="2024-02-12T19:08:15.712601766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 19:08:16.265991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522856076.mount: Deactivated successfully. Feb 12 19:08:16.911853 env[1143]: time="2024-02-12T19:08:16.911808185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:16.913231 env[1143]: time="2024-02-12T19:08:16.913184930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:16.914607 env[1143]: time="2024-02-12T19:08:16.914568855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:16.915901 env[1143]: time="2024-02-12T19:08:16.915877351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:16.916317 env[1143]: time="2024-02-12T19:08:16.916279665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 12 19:08:23.307871 systemd[1]: Stopped kubelet.service. Feb 12 19:08:23.322993 systemd[1]: Reloading. 
Feb 12 19:08:23.363744 /usr/lib/systemd/system-generators/torcx-generator[1600]: time="2024-02-12T19:08:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:08:23.363773 /usr/lib/systemd/system-generators/torcx-generator[1600]: time="2024-02-12T19:08:23Z" level=info msg="torcx already run" Feb 12 19:08:23.421473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:08:23.421491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:08:23.436684 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:08:23.504150 systemd[1]: Started kubelet.service. Feb 12 19:08:23.549863 kubelet[1637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:08:23.550194 kubelet[1637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:08:23.550194 kubelet[1637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:08:23.550194 kubelet[1637]: I0212 19:08:23.550137 1637 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:08:24.394713 kubelet[1637]: I0212 19:08:24.394673 1637 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 19:08:24.394713 kubelet[1637]: I0212 19:08:24.394702 1637 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:08:24.394917 kubelet[1637]: I0212 19:08:24.394893 1637 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 19:08:24.400956 kubelet[1637]: I0212 19:08:24.400932 1637 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:08:24.401317 kubelet[1637]: E0212 19:08:24.401301 1637 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.402568 kubelet[1637]: W0212 19:08:24.402546 1637 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:08:24.403210 kubelet[1637]: I0212 19:08:24.403183 1637 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:08:24.403403 kubelet[1637]: I0212 19:08:24.403388 1637 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:08:24.403477 kubelet[1637]: I0212 19:08:24.403454 1637 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:08:24.403477 kubelet[1637]: I0212 19:08:24.403477 1637 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:08:24.403601 kubelet[1637]: I0212 19:08:24.403487 1637 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 19:08:24.403601 kubelet[1637]: I0212 19:08:24.403583 1637 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:08:24.406598 kubelet[1637]: I0212 19:08:24.406566 1637 kubelet.go:405] "Attempting to sync node with API server" Feb 12 19:08:24.406598 kubelet[1637]: I0212 19:08:24.406589 1637 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:08:24.406692 kubelet[1637]: I0212 19:08:24.406611 1637 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:08:24.406692 kubelet[1637]: I0212 19:08:24.406625 1637 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:08:24.407118 kubelet[1637]: W0212 19:08:24.407072 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.407175 kubelet[1637]: E0212 19:08:24.407134 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.407394 kubelet[1637]: I0212 19:08:24.407364 1637 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:08:24.407623 kubelet[1637]: W0212 19:08:24.407583 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.407623 kubelet[1637]: E0212 19:08:24.407625 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.407879 kubelet[1637]: W0212 19:08:24.407857 1637 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:08:24.408747 kubelet[1637]: I0212 19:08:24.408727 1637 server.go:1168] "Started kubelet" Feb 12 19:08:24.408840 kubelet[1637]: I0212 19:08:24.408818 1637 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:08:24.409185 kubelet[1637]: I0212 19:08:24.409162 1637 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:08:24.409414 kubelet[1637]: I0212 19:08:24.409394 1637 server.go:461] "Adding debug handlers to kubelet server" Feb 12 19:08:24.410392 kubelet[1637]: E0212 19:08:24.410167 1637 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33321e728ef5d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 8, 24, 408706909, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 8, 24, 408706909, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.15:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.15:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:08:24.410630 kubelet[1637]: E0212 19:08:24.410577 1637 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:08:24.410630 kubelet[1637]: E0212 19:08:24.410600 1637 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:08:24.413042 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:08:24.413170 kubelet[1637]: I0212 19:08:24.413142 1637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:08:24.413740 kubelet[1637]: E0212 19:08:24.413704 1637 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:08:24.413809 kubelet[1637]: I0212 19:08:24.413754 1637 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 19:08:24.413882 kubelet[1637]: I0212 19:08:24.413865 1637 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 19:08:24.415273 kubelet[1637]: E0212 19:08:24.415220 1637 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Feb 12 19:08:24.415367 kubelet[1637]: W0212 19:08:24.415323 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.415419 kubelet[1637]: E0212 19:08:24.415388 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.426177 kubelet[1637]: I0212 19:08:24.426146 1637 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:08:24.427070 kubelet[1637]: I0212 19:08:24.427043 1637 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:08:24.427070 kubelet[1637]: I0212 19:08:24.427070 1637 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 19:08:24.427160 kubelet[1637]: I0212 19:08:24.427090 1637 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 19:08:24.427160 kubelet[1637]: E0212 19:08:24.427156 1637 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:08:24.430936 kubelet[1637]: W0212 19:08:24.430881 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.430936 kubelet[1637]: E0212 19:08:24.430937 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:24.434549 kubelet[1637]: I0212 19:08:24.434528 1637 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:08:24.434632 kubelet[1637]: I0212 19:08:24.434559 1637 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:08:24.434632 kubelet[1637]: I0212 19:08:24.434578 1637 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:08:24.436293 kubelet[1637]: I0212 19:08:24.436261 1637 policy_none.go:49] "None policy: Start" Feb 12 19:08:24.436761 kubelet[1637]: I0212 19:08:24.436738 1637 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:08:24.436761 kubelet[1637]: I0212 19:08:24.436762 1637 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:08:24.441880 systemd[1]: Created slice kubepods.slice. Feb 12 19:08:24.445577 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:08:24.447874 systemd[1]: Created slice kubepods-besteffort.slice. 
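The "Failed to ensure lease exists, will retry" entry above shows the kubelet backing off while the API server at 10.0.0.15:6443 still refuses connections: the retry interval starts at 200ms and doubles to 400ms and 800ms in the entries further below. A small sketch of that doubling backoff; the cap value is an assumption for illustration, not the kubelet's actual limit.

import itertools

def backoff_intervals(start=0.2, factor=2.0, cap=7.0):
    """Yield retry delays in seconds: 0.2, 0.4, 0.8, ... up to an assumed cap."""
    delay = start
    while True:
        yield delay
        delay = min(delay * factor, cap)

# First few intervals, matching the 200ms / 400ms / 800ms seen in this log.
print([round(d, 1) for d in itertools.islice(backoff_intervals(), 6)])
# [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]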
Feb 12 19:08:24.448765 kubelet[1637]: W0212 19:08:24.448742 1637 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Feb 12 19:08:24.458082 kubelet[1637]: I0212 19:08:24.458059 1637 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:08:24.458461 kubelet[1637]: I0212 19:08:24.458443 1637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:08:24.458932 kubelet[1637]: E0212 19:08:24.458915 1637 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 19:08:24.516020 kubelet[1637]: I0212 19:08:24.515986 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:08:24.516335 kubelet[1637]: E0212 19:08:24.516322 1637 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Feb 12 19:08:24.527469 kubelet[1637]: I0212 19:08:24.527446 1637 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:24.528599 kubelet[1637]: I0212 19:08:24.528575 1637 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:24.529710 kubelet[1637]: I0212 19:08:24.529686 1637 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:24.534110 systemd[1]: Created slice kubepods-burstable-pod5c775f7345d65e52da1951507de234fe.slice. Feb 12 19:08:24.548612 systemd[1]: Created slice kubepods-burstable-pod7709ea05d7cdf82b0d7e594b61a10331.slice. Feb 12 19:08:24.559971 systemd[1]: Created slice kubepods-burstable-pod2b0e94b38682f4e439413801d3cc54db.slice. 
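The three Topology Admit Handler entries above are the static control-plane pods from /etc/kubernetes/manifests, and the kubepods-burstable-pod<uid>.slice units systemd creates for them sit under the kubepods-burstable.slice QoS slice created just before. A sketch of how those slice names are formed from a pod UID; placing guaranteed pods directly under kubepods.slice is inferred from the naming convention rather than shown in this log, and dashes in UIDs become underscores, as the flannel pod slice later in the log illustrates.

def pod_slice(pod_uid, qos="burstable"):
    """Build the systemd slice name used for a pod's cgroup."""
    parent = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    # '-' separates slice hierarchy levels in systemd, so dashes inside the
    # Kubernetes UID are replaced with underscores in the slice name.
    return f"{parent}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("5c775f7345d65e52da1951507de234fe"))
# kubepods-burstable-pod5c775f7345d65e52da1951507de234fe.slice
print(pod_slice("5415a516-7d38-40f3-a63f-2b47f7707bac"))
# kubepods-burstable-pod5415a516_7d38_40f3_a63f_2b47f7707bac.slice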
Feb 12 19:08:24.615897 kubelet[1637]: E0212 19:08:24.615865 1637 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Feb 12 19:08:24.616361 kubelet[1637]: I0212 19:08:24.616342 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c775f7345d65e52da1951507de234fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5c775f7345d65e52da1951507de234fe\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:24.616559 kubelet[1637]: I0212 19:08:24.616524 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:24.616704 kubelet[1637]: I0212 19:08:24.616691 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:24.616910 kubelet[1637]: I0212 19:08:24.616896 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:24.617075 kubelet[1637]: I0212 19:08:24.617062 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:24.617199 kubelet[1637]: I0212 19:08:24.617188 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:08:24.617306 kubelet[1637]: I0212 19:08:24.617296 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c775f7345d65e52da1951507de234fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c775f7345d65e52da1951507de234fe\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:24.617425 kubelet[1637]: I0212 19:08:24.617413 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c775f7345d65e52da1951507de234fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c775f7345d65e52da1951507de234fe\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:24.617527 kubelet[1637]: 
I0212 19:08:24.617518 1637 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:24.718294 kubelet[1637]: I0212 19:08:24.718269 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:08:24.718651 kubelet[1637]: E0212 19:08:24.718617 1637 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Feb 12 19:08:24.849237 kubelet[1637]: E0212 19:08:24.849202 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:24.849888 env[1143]: time="2024-02-12T19:08:24.849841584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5c775f7345d65e52da1951507de234fe,Namespace:kube-system,Attempt:0,}" Feb 12 19:08:24.858462 kubelet[1637]: E0212 19:08:24.858442 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:24.858933 env[1143]: time="2024-02-12T19:08:24.858891573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,}" Feb 12 19:08:24.862505 kubelet[1637]: E0212 19:08:24.862477 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:24.862937 env[1143]: time="2024-02-12T19:08:24.862899763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,}" Feb 12 19:08:25.017193 kubelet[1637]: E0212 19:08:25.017099 1637 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Feb 12 19:08:25.120641 kubelet[1637]: I0212 19:08:25.120607 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:08:25.120977 kubelet[1637]: E0212 19:08:25.120961 1637 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Feb 12 19:08:25.323070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898304393.mount: Deactivated successfully. 
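The dns.go "Nameserver limits exceeded" errors above mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied to pod DNS config. A minimal sketch of that trimming; the sample resolv.conf content is invented beyond the three servers named in the log.

MAX_NAMESERVERS = 3  # resolver limit that triggers the warning above

def applied_nameservers(resolv_conf_text):
    """Return the nameservers that would actually be applied (first three)."""
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]

sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(applied_nameservers(sample))
# ['1.1.1.1', '1.0.0.1', '8.8.8.8'] -- the fourth entry is dropped, hence the warning.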
Feb 12 19:08:25.328990 env[1143]: time="2024-02-12T19:08:25.328946866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.330270 env[1143]: time="2024-02-12T19:08:25.330243537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.331422 env[1143]: time="2024-02-12T19:08:25.331390093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.332242 env[1143]: time="2024-02-12T19:08:25.332213282Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.334523 env[1143]: time="2024-02-12T19:08:25.334491887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.336722 env[1143]: time="2024-02-12T19:08:25.336689721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.339692 env[1143]: time="2024-02-12T19:08:25.339646956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.342843 env[1143]: time="2024-02-12T19:08:25.342812416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.343745 env[1143]: time="2024-02-12T19:08:25.343720214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.344664 env[1143]: time="2024-02-12T19:08:25.344637603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.345464 env[1143]: time="2024-02-12T19:08:25.345438770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.346235 env[1143]: time="2024-02-12T19:08:25.346196453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:25.370487 env[1143]: time="2024-02-12T19:08:25.370386009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:08:25.370487 env[1143]: time="2024-02-12T19:08:25.370430611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:08:25.370487 env[1143]: time="2024-02-12T19:08:25.370441083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:08:25.370671 env[1143]: time="2024-02-12T19:08:25.370632282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56a8727837cec084627c36b859ddf939f3237c8bf929f293801072d3d4156d96 pid=1683 runtime=io.containerd.runc.v2 Feb 12 19:08:25.371187 env[1143]: time="2024-02-12T19:08:25.370923317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:08:25.371187 env[1143]: time="2024-02-12T19:08:25.370951414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:08:25.371187 env[1143]: time="2024-02-12T19:08:25.370961245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:08:25.371187 env[1143]: time="2024-02-12T19:08:25.371163915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5db7729880adc419eb7c81988050ec0422f3c87ed03639c496e0a92693b1e3f0 pid=1684 runtime=io.containerd.runc.v2 Feb 12 19:08:25.374803 env[1143]: time="2024-02-12T19:08:25.374722485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:08:25.374886 env[1143]: time="2024-02-12T19:08:25.374824879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:08:25.374886 env[1143]: time="2024-02-12T19:08:25.374853096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:08:25.375077 env[1143]: time="2024-02-12T19:08:25.375034023Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55a0a09fc6d38d85a8d4cae3c33d9797e0026ccc24e3dab48df5b6c789e5311d pid=1711 runtime=io.containerd.runc.v2 Feb 12 19:08:25.387715 systemd[1]: Started cri-containerd-56a8727837cec084627c36b859ddf939f3237c8bf929f293801072d3d4156d96.scope. Feb 12 19:08:25.388810 systemd[1]: Started cri-containerd-5db7729880adc419eb7c81988050ec0422f3c87ed03639c496e0a92693b1e3f0.scope. Feb 12 19:08:25.396879 systemd[1]: Started cri-containerd-55a0a09fc6d38d85a8d4cae3c33d9797e0026ccc24e3dab48df5b6c789e5311d.scope. 
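Each of the three sandboxes above gets its own runc v2 shim: containerd logs the shim's task directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id> and systemd starts a matching cri-containerd-<sandbox-id>.scope transient unit. A small sketch that rebuilds both names from a sandbox ID, using only the patterns visible in this log rather than a live system.

def shim_artifacts(sandbox_id, namespace="k8s.io"):
    """Return (task_dir, scope_unit) as they appear in the log for one sandbox."""
    task_dir = f"/run/containerd/io.containerd.runtime.v2.task/{namespace}/{sandbox_id}"
    scope_unit = f"cri-containerd-{sandbox_id}.scope"
    return task_dir, scope_unit

sid = "56a8727837cec084627c36b859ddf939f3237c8bf929f293801072d3d4156d96"
for name in shim_artifacts(sid):
    print(name)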
Feb 12 19:08:25.411408 kubelet[1637]: W0212 19:08:25.411088 1637 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:25.411408 kubelet[1637]: E0212 19:08:25.411143 1637 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Feb 12 19:08:25.453010 env[1143]: time="2024-02-12T19:08:25.452965984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5c775f7345d65e52da1951507de234fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5db7729880adc419eb7c81988050ec0422f3c87ed03639c496e0a92693b1e3f0\"" Feb 12 19:08:25.455091 kubelet[1637]: E0212 19:08:25.454896 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:25.458138 env[1143]: time="2024-02-12T19:08:25.458100950Z" level=info msg="CreateContainer within sandbox \"5db7729880adc419eb7c81988050ec0422f3c87ed03639c496e0a92693b1e3f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:08:25.471092 env[1143]: time="2024-02-12T19:08:25.470074170Z" level=info msg="CreateContainer within sandbox \"5db7729880adc419eb7c81988050ec0422f3c87ed03639c496e0a92693b1e3f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9b6100a72c4a859865baa232b542f3a58011edda333deec95d6554586429750\"" Feb 12 19:08:25.471400 env[1143]: time="2024-02-12T19:08:25.471292346Z" level=info msg="StartContainer for \"f9b6100a72c4a859865baa232b542f3a58011edda333deec95d6554586429750\"" Feb 12 19:08:25.480456 env[1143]: time="2024-02-12T19:08:25.480409566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,} returns sandbox id \"55a0a09fc6d38d85a8d4cae3c33d9797e0026ccc24e3dab48df5b6c789e5311d\"" Feb 12 19:08:25.481270 kubelet[1637]: E0212 19:08:25.481095 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:25.482801 env[1143]: time="2024-02-12T19:08:25.482765307Z" level=info msg="CreateContainer within sandbox \"55a0a09fc6d38d85a8d4cae3c33d9797e0026ccc24e3dab48df5b6c789e5311d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:08:25.485737 env[1143]: time="2024-02-12T19:08:25.485692607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,} returns sandbox id \"56a8727837cec084627c36b859ddf939f3237c8bf929f293801072d3d4156d96\"" Feb 12 19:08:25.486524 kubelet[1637]: E0212 19:08:25.486346 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:25.488171 env[1143]: time="2024-02-12T19:08:25.488140111Z" level=info msg="CreateContainer within sandbox \"56a8727837cec084627c36b859ddf939f3237c8bf929f293801072d3d4156d96\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:08:25.494664 systemd[1]: Started cri-containerd-f9b6100a72c4a859865baa232b542f3a58011edda333deec95d6554586429750.scope. Feb 12 19:08:25.511043 env[1143]: time="2024-02-12T19:08:25.510966452Z" level=info msg="CreateContainer within sandbox \"56a8727837cec084627c36b859ddf939f3237c8bf929f293801072d3d4156d96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e40626dee49d9278c1f1916542af437045dccfd283d207ecf18c7def2f44ae5\"" Feb 12 19:08:25.511599 env[1143]: time="2024-02-12T19:08:25.511565788Z" level=info msg="StartContainer for \"0e40626dee49d9278c1f1916542af437045dccfd283d207ecf18c7def2f44ae5\"" Feb 12 19:08:25.512960 env[1143]: time="2024-02-12T19:08:25.512923448Z" level=info msg="CreateContainer within sandbox \"55a0a09fc6d38d85a8d4cae3c33d9797e0026ccc24e3dab48df5b6c789e5311d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"520cc5ef7fa917fd9c0282f5a5d2953126b6ec01e2cb034484cce74ee8d3341e\"" Feb 12 19:08:25.513469 env[1143]: time="2024-02-12T19:08:25.513437895Z" level=info msg="StartContainer for \"520cc5ef7fa917fd9c0282f5a5d2953126b6ec01e2cb034484cce74ee8d3341e\"" Feb 12 19:08:25.531795 systemd[1]: Started cri-containerd-520cc5ef7fa917fd9c0282f5a5d2953126b6ec01e2cb034484cce74ee8d3341e.scope. Feb 12 19:08:25.536003 systemd[1]: Started cri-containerd-0e40626dee49d9278c1f1916542af437045dccfd283d207ecf18c7def2f44ae5.scope. Feb 12 19:08:25.554602 env[1143]: time="2024-02-12T19:08:25.554538802Z" level=info msg="StartContainer for \"f9b6100a72c4a859865baa232b542f3a58011edda333deec95d6554586429750\" returns successfully" Feb 12 19:08:25.616270 env[1143]: time="2024-02-12T19:08:25.616175015Z" level=info msg="StartContainer for \"520cc5ef7fa917fd9c0282f5a5d2953126b6ec01e2cb034484cce74ee8d3341e\" returns successfully" Feb 12 19:08:25.616596 env[1143]: time="2024-02-12T19:08:25.616568005Z" level=info msg="StartContainer for \"0e40626dee49d9278c1f1916542af437045dccfd283d207ecf18c7def2f44ae5\" returns successfully" Feb 12 19:08:25.923007 kubelet[1637]: I0212 19:08:25.922911 1637 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:08:26.437993 kubelet[1637]: E0212 19:08:26.437960 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:26.441257 kubelet[1637]: E0212 19:08:26.441230 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:26.441997 kubelet[1637]: E0212 19:08:26.441980 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:27.443684 kubelet[1637]: E0212 19:08:27.443655 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:28.291936 kubelet[1637]: I0212 19:08:28.291897 1637 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:08:28.413681 kubelet[1637]: I0212 19:08:28.413637 1637 apiserver.go:52] "Watching apiserver" Feb 12 19:08:28.448719 kubelet[1637]: E0212 19:08:28.448682 1637 kubelet.go:1856] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:28.449251 kubelet[1637]: E0212 19:08:28.449219 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:28.515079 kubelet[1637]: I0212 19:08:28.515046 1637 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 19:08:28.539736 kubelet[1637]: I0212 19:08:28.539709 1637 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:08:30.615147 kubelet[1637]: E0212 19:08:30.615112 1637 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:31.111804 systemd[1]: Reloading. Feb 12 19:08:31.175669 /usr/lib/systemd/system-generators/torcx-generator[1929]: time="2024-02-12T19:08:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:08:31.175702 /usr/lib/systemd/system-generators/torcx-generator[1929]: time="2024-02-12T19:08:31Z" level=info msg="torcx already run" Feb 12 19:08:31.238907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:08:31.238927 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:08:31.254302 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:08:31.336052 systemd[1]: Stopping kubelet.service... Feb 12 19:08:31.355786 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:08:31.355996 systemd[1]: Stopped kubelet.service. Feb 12 19:08:31.356048 systemd[1]: kubelet.service: Consumed 1.202s CPU time. Feb 12 19:08:31.357909 systemd[1]: Started kubelet.service. Feb 12 19:08:31.428804 kubelet[1968]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:08:31.428804 kubelet[1968]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:08:31.428804 kubelet[1968]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:08:31.429113 kubelet[1968]: I0212 19:08:31.428781 1968 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:08:31.433056 kubelet[1968]: I0212 19:08:31.433021 1968 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 19:08:31.433056 kubelet[1968]: I0212 19:08:31.433055 1968 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:08:31.433271 kubelet[1968]: I0212 19:08:31.433256 1968 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 19:08:31.434827 kubelet[1968]: I0212 19:08:31.434802 1968 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:08:31.435758 kubelet[1968]: I0212 19:08:31.435722 1968 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:08:31.437263 kubelet[1968]: W0212 19:08:31.437248 1968 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:08:31.437995 kubelet[1968]: I0212 19:08:31.437981 1968 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:08:31.438170 kubelet[1968]: I0212 19:08:31.438161 1968 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:08:31.438240 kubelet[1968]: I0212 19:08:31.438229 1968 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:08:31.438313 kubelet[1968]: I0212 19:08:31.438249 1968 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:08:31.438313 kubelet[1968]: I0212 19:08:31.438259 1968 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 19:08:31.438313 kubelet[1968]: I0212 19:08:31.438287 1968 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:08:31.442671 kubelet[1968]: I0212 19:08:31.442642 1968 kubelet.go:405] "Attempting to sync node with API server" Feb 12 19:08:31.442671 kubelet[1968]: I0212 19:08:31.442669 1968 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:08:31.442810 kubelet[1968]: I0212 19:08:31.442694 1968 kubelet.go:309] "Adding 
apiserver pod source" Feb 12 19:08:31.442810 kubelet[1968]: I0212 19:08:31.442711 1968 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.443385 1968 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.443813 1968 server.go:1168] "Started kubelet" Feb 12 19:08:31.453600 kubelet[1968]: E0212 19:08:31.444832 1968 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:08:31.453600 kubelet[1968]: E0212 19:08:31.444874 1968 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.444977 1968 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.445148 1968 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.445318 1968 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.446043 1968 server.go:461] "Adding debug handlers to kubelet server" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.449410 1968 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.450746 1968 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.452119 1968 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.452910 1968 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.452938 1968 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 19:08:31.453600 kubelet[1968]: I0212 19:08:31.452951 1968 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 19:08:31.453600 kubelet[1968]: E0212 19:08:31.452999 1968 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:08:31.515908 kubelet[1968]: I0212 19:08:31.515878 1968 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:08:31.515908 kubelet[1968]: I0212 19:08:31.515903 1968 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:08:31.516074 kubelet[1968]: I0212 19:08:31.515922 1968 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:08:31.516098 kubelet[1968]: I0212 19:08:31.516078 1968 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:08:31.516098 kubelet[1968]: I0212 19:08:31.516092 1968 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:08:31.516098 kubelet[1968]: I0212 19:08:31.516097 1968 policy_none.go:49] "None policy: Start" Feb 12 19:08:31.516841 kubelet[1968]: I0212 19:08:31.516820 1968 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:08:31.516945 kubelet[1968]: I0212 19:08:31.516933 1968 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:08:31.517129 kubelet[1968]: I0212 19:08:31.517115 1968 state_mem.go:75] "Updated machine memory state" Feb 12 19:08:31.520566 kubelet[1968]: I0212 19:08:31.520534 1968 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:08:31.520778 kubelet[1968]: I0212 19:08:31.520752 1968 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:08:31.553639 kubelet[1968]: I0212 19:08:31.553601 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:31.553786 kubelet[1968]: I0212 19:08:31.553685 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:31.553786 kubelet[1968]: I0212 19:08:31.553771 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:31.558980 kubelet[1968]: I0212 19:08:31.558949 1968 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:08:31.573390 kubelet[1968]: E0212 19:08:31.573337 1968 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 19:08:31.594148 kubelet[1968]: I0212 19:08:31.594102 1968 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 19:08:31.594266 kubelet[1968]: I0212 19:08:31.594198 1968 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:08:31.752348 kubelet[1968]: I0212 19:08:31.752312 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:31.752548 kubelet[1968]: I0212 19:08:31.752364 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:31.752548 kubelet[1968]: I0212 19:08:31.752396 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:31.752548 kubelet[1968]: I0212 19:08:31.752417 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:31.752548 kubelet[1968]: I0212 19:08:31.752445 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:08:31.752548 kubelet[1968]: I0212 19:08:31.752466 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c775f7345d65e52da1951507de234fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c775f7345d65e52da1951507de234fe\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:31.752665 kubelet[1968]: I0212 19:08:31.752488 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c775f7345d65e52da1951507de234fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5c775f7345d65e52da1951507de234fe\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:31.752665 kubelet[1968]: I0212 19:08:31.752515 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:08:31.752665 kubelet[1968]: I0212 19:08:31.752544 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c775f7345d65e52da1951507de234fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c775f7345d65e52da1951507de234fe\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:08:31.874908 kubelet[1968]: E0212 19:08:31.874867 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:31.875051 kubelet[1968]: E0212 19:08:31.874982 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:31.875501 kubelet[1968]: E0212 19:08:31.875478 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:32.443839 kubelet[1968]: I0212 19:08:32.443805 1968 apiserver.go:52] "Watching apiserver" Feb 12 19:08:32.451360 kubelet[1968]: I0212 19:08:32.451330 1968 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 19:08:32.456881 kubelet[1968]: I0212 19:08:32.456852 1968 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:08:32.494085 kubelet[1968]: E0212 19:08:32.494059 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:32.494743 kubelet[1968]: E0212 19:08:32.494723 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:32.494891 kubelet[1968]: E0212 19:08:32.494837 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:32.515401 kubelet[1968]: I0212 19:08:32.515356 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5152876210000001 podCreationTimestamp="2024-02-12 19:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:08:32.512094191 +0000 UTC m=+1.145296156" watchObservedRunningTime="2024-02-12 19:08:32.515287621 +0000 UTC m=+1.148489586" Feb 12 19:08:32.529419 kubelet[1968]: I0212 19:08:32.529387 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.529337125 podCreationTimestamp="2024-02-12 19:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:08:32.521889203 +0000 UTC m=+1.155091168" watchObservedRunningTime="2024-02-12 19:08:32.529337125 +0000 UTC m=+1.162539090" Feb 12 19:08:32.536993 kubelet[1968]: I0212 19:08:32.536947 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.536912689 podCreationTimestamp="2024-02-12 19:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:08:32.529290204 +0000 UTC m=+1.162492169" watchObservedRunningTime="2024-02-12 19:08:32.536912689 +0000 UTC m=+1.170114614" Feb 12 19:08:32.816281 sudo[1233]: pam_unix(sudo:session): session closed for user root Feb 12 19:08:32.818577 sshd[1230]: pam_unix(sshd:session): session closed for user core Feb 12 19:08:32.821504 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:32796.service: Deactivated successfully. Feb 12 19:08:32.822235 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:08:32.822428 systemd[1]: session-5.scope: Consumed 7.443s CPU time. Feb 12 19:08:32.822629 systemd-logind[1126]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:08:32.823263 systemd-logind[1126]: Removed session 5. 
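For these control-plane pods the pod_startup_latency_tracker entries above report firstStartedPulling and lastFinishedPulling as the zero time (no image pull was needed), and the reported podStartSLOduration lines up with watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic for kube-controller-manager-localhost; which timestamps enter the SLO is inferred from these values, not from kubelet source.

from datetime import datetime, timezone

created = datetime(2024, 2, 12, 19, 8, 31, tzinfo=timezone.utc)
watch_observed = datetime(2024, 2, 12, 19, 8, 32, 515288, tzinfo=timezone.utc)

print(f"{(watch_observed - created).total_seconds():.6f}s")
# 1.515288s, matching podStartSLOduration=1.5152876210000001 up to rounding.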
Feb 12 19:08:33.495743 kubelet[1968]: E0212 19:08:33.495706 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:33.496122 kubelet[1968]: E0212 19:08:33.495915 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:38.753599 kubelet[1968]: E0212 19:08:38.751220 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:39.503769 kubelet[1968]: E0212 19:08:39.503742 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:39.809501 kubelet[1968]: E0212 19:08:39.809405 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:40.505485 kubelet[1968]: E0212 19:08:40.505452 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:40.505636 kubelet[1968]: E0212 19:08:40.505626 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:42.514288 kubelet[1968]: E0212 19:08:42.514254 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:43.950095 update_engine[1131]: I0212 19:08:43.949921 1131 update_attempter.cc:509] Updating boot flags... Feb 12 19:08:45.893749 kubelet[1968]: I0212 19:08:45.893711 1968 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:08:45.894090 env[1143]: time="2024-02-12T19:08:45.894047779Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:08:45.894293 kubelet[1968]: I0212 19:08:45.894232 1968 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:08:46.491041 kubelet[1968]: I0212 19:08:46.491004 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:46.491771 kubelet[1968]: I0212 19:08:46.491730 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:46.496801 systemd[1]: Created slice kubepods-burstable-pod5415a516_7d38_40f3_a63f_2b47f7707bac.slice. Feb 12 19:08:46.501484 systemd[1]: Created slice kubepods-besteffort-poda6155dfa_161e_43b8_8dfe_8e2395ffb5d3.slice. 
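The kuberuntime_manager entry above pushes PodCIDR 192.168.0.0/24 to the runtime once the node object carries it; the flannel and kube-proxy pods admitted right after will operate against that per-node range. A quick look at the range with Python's ipaddress module, just to make the size of the allocation concrete.

import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # value taken from the log
print(pod_cidr.num_addresses)           # 256 addresses in the node's pod range
print(pod_cidr[1], "-", pod_cidr[-2])   # 192.168.0.1 - 192.168.0.254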
Feb 12 19:08:46.559787 kubelet[1968]: I0212 19:08:46.559746 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5415a516-7d38-40f3-a63f-2b47f7707bac-cni-plugin\") pod \"kube-flannel-ds-c8cxd\" (UID: \"5415a516-7d38-40f3-a63f-2b47f7707bac\") " pod="kube-flannel/kube-flannel-ds-c8cxd" Feb 12 19:08:46.560020 kubelet[1968]: I0212 19:08:46.560002 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6155dfa-161e-43b8-8dfe-8e2395ffb5d3-xtables-lock\") pod \"kube-proxy-dlggp\" (UID: \"a6155dfa-161e-43b8-8dfe-8e2395ffb5d3\") " pod="kube-system/kube-proxy-dlggp" Feb 12 19:08:46.560127 kubelet[1968]: I0212 19:08:46.560117 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2qj\" (UniqueName: \"kubernetes.io/projected/a6155dfa-161e-43b8-8dfe-8e2395ffb5d3-kube-api-access-5r2qj\") pod \"kube-proxy-dlggp\" (UID: \"a6155dfa-161e-43b8-8dfe-8e2395ffb5d3\") " pod="kube-system/kube-proxy-dlggp" Feb 12 19:08:46.560221 kubelet[1968]: I0212 19:08:46.560211 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5415a516-7d38-40f3-a63f-2b47f7707bac-run\") pod \"kube-flannel-ds-c8cxd\" (UID: \"5415a516-7d38-40f3-a63f-2b47f7707bac\") " pod="kube-flannel/kube-flannel-ds-c8cxd" Feb 12 19:08:46.560324 kubelet[1968]: I0212 19:08:46.560311 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhg9f\" (UniqueName: \"kubernetes.io/projected/5415a516-7d38-40f3-a63f-2b47f7707bac-kube-api-access-zhg9f\") pod \"kube-flannel-ds-c8cxd\" (UID: \"5415a516-7d38-40f3-a63f-2b47f7707bac\") " pod="kube-flannel/kube-flannel-ds-c8cxd" Feb 12 19:08:46.560627 kubelet[1968]: I0212 19:08:46.560611 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6155dfa-161e-43b8-8dfe-8e2395ffb5d3-lib-modules\") pod \"kube-proxy-dlggp\" (UID: \"a6155dfa-161e-43b8-8dfe-8e2395ffb5d3\") " pod="kube-system/kube-proxy-dlggp" Feb 12 19:08:46.560763 kubelet[1968]: I0212 19:08:46.560749 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6155dfa-161e-43b8-8dfe-8e2395ffb5d3-kube-proxy\") pod \"kube-proxy-dlggp\" (UID: \"a6155dfa-161e-43b8-8dfe-8e2395ffb5d3\") " pod="kube-system/kube-proxy-dlggp" Feb 12 19:08:46.560913 kubelet[1968]: I0212 19:08:46.560882 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5415a516-7d38-40f3-a63f-2b47f7707bac-cni\") pod \"kube-flannel-ds-c8cxd\" (UID: \"5415a516-7d38-40f3-a63f-2b47f7707bac\") " pod="kube-flannel/kube-flannel-ds-c8cxd" Feb 12 19:08:46.560954 kubelet[1968]: I0212 19:08:46.560926 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5415a516-7d38-40f3-a63f-2b47f7707bac-flannel-cfg\") pod \"kube-flannel-ds-c8cxd\" (UID: \"5415a516-7d38-40f3-a63f-2b47f7707bac\") " pod="kube-flannel/kube-flannel-ds-c8cxd" Feb 12 19:08:46.560954 kubelet[1968]: I0212 19:08:46.560948 1968 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5415a516-7d38-40f3-a63f-2b47f7707bac-xtables-lock\") pod \"kube-flannel-ds-c8cxd\" (UID: \"5415a516-7d38-40f3-a63f-2b47f7707bac\") " pod="kube-flannel/kube-flannel-ds-c8cxd" Feb 12 19:08:46.799839 kubelet[1968]: E0212 19:08:46.799705 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:46.800665 env[1143]: time="2024-02-12T19:08:46.800504435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-c8cxd,Uid:5415a516-7d38-40f3-a63f-2b47f7707bac,Namespace:kube-flannel,Attempt:0,}" Feb 12 19:08:46.808550 kubelet[1968]: E0212 19:08:46.808512 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:46.810610 env[1143]: time="2024-02-12T19:08:46.809136204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dlggp,Uid:a6155dfa-161e-43b8-8dfe-8e2395ffb5d3,Namespace:kube-system,Attempt:0,}" Feb 12 19:08:46.819401 env[1143]: time="2024-02-12T19:08:46.819293988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:08:46.819401 env[1143]: time="2024-02-12T19:08:46.819335989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:08:46.819401 env[1143]: time="2024-02-12T19:08:46.819346629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:08:46.822455 env[1143]: time="2024-02-12T19:08:46.819717913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4 pid=2055 runtime=io.containerd.runc.v2 Feb 12 19:08:46.833966 systemd[1]: Started cri-containerd-fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4.scope. Feb 12 19:08:46.843859 env[1143]: time="2024-02-12T19:08:46.843775920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:08:46.844109 env[1143]: time="2024-02-12T19:08:46.844082603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:08:46.844238 env[1143]: time="2024-02-12T19:08:46.844199924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:08:46.844539 env[1143]: time="2024-02-12T19:08:46.844506087Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1bf2d9d239d2d95a860d8e6dda070464bbd1182a83167098e70dfa2d3aad58f pid=2081 runtime=io.containerd.runc.v2 Feb 12 19:08:46.869688 systemd[1]: Started cri-containerd-a1bf2d9d239d2d95a860d8e6dda070464bbd1182a83167098e70dfa2d3aad58f.scope. 
Feb 12 19:08:46.883786 env[1143]: time="2024-02-12T19:08:46.883740851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-c8cxd,Uid:5415a516-7d38-40f3-a63f-2b47f7707bac,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\"" Feb 12 19:08:46.884650 kubelet[1968]: E0212 19:08:46.884617 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:46.886173 env[1143]: time="2024-02-12T19:08:46.885865113Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 12 19:08:46.905247 env[1143]: time="2024-02-12T19:08:46.905203832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dlggp,Uid:a6155dfa-161e-43b8-8dfe-8e2395ffb5d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bf2d9d239d2d95a860d8e6dda070464bbd1182a83167098e70dfa2d3aad58f\"" Feb 12 19:08:46.905793 kubelet[1968]: E0212 19:08:46.905763 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:46.907904 env[1143]: time="2024-02-12T19:08:46.907868459Z" level=info msg="CreateContainer within sandbox \"a1bf2d9d239d2d95a860d8e6dda070464bbd1182a83167098e70dfa2d3aad58f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:08:46.920193 env[1143]: time="2024-02-12T19:08:46.920146265Z" level=info msg="CreateContainer within sandbox \"a1bf2d9d239d2d95a860d8e6dda070464bbd1182a83167098e70dfa2d3aad58f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8bdcabfffd544b1ecaec27781853a4cc11d094be256d5560f3bad9251b7b9136\"" Feb 12 19:08:46.920818 env[1143]: time="2024-02-12T19:08:46.920776792Z" level=info msg="StartContainer for \"8bdcabfffd544b1ecaec27781853a4cc11d094be256d5560f3bad9251b7b9136\"" Feb 12 19:08:46.934975 systemd[1]: Started cri-containerd-8bdcabfffd544b1ecaec27781853a4cc11d094be256d5560f3bad9251b7b9136.scope. Feb 12 19:08:46.973746 env[1143]: time="2024-02-12T19:08:46.972942248Z" level=info msg="StartContainer for \"8bdcabfffd544b1ecaec27781853a4cc11d094be256d5560f3bad9251b7b9136\" returns successfully" Feb 12 19:08:47.515750 kubelet[1968]: E0212 19:08:47.515705 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:47.526448 kubelet[1968]: I0212 19:08:47.526417 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dlggp" podStartSLOduration=1.5263630849999998 podCreationTimestamp="2024-02-12 19:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:08:47.526079082 +0000 UTC m=+16.159281047" watchObservedRunningTime="2024-02-12 19:08:47.526363085 +0000 UTC m=+16.159565050" Feb 12 19:08:47.966279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767988113.mount: Deactivated successfully. 
Feb 12 19:08:48.103541 env[1143]: time="2024-02-12T19:08:48.103474614Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:48.105436 env[1143]: time="2024-02-12T19:08:48.105394712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:48.106911 env[1143]: time="2024-02-12T19:08:48.106880326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:48.108426 env[1143]: time="2024-02-12T19:08:48.108393300Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:48.109627 env[1143]: time="2024-02-12T19:08:48.109560631Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 12 19:08:48.111538 env[1143]: time="2024-02-12T19:08:48.111505129Z" level=info msg="CreateContainer within sandbox \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 12 19:08:48.122803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307076956.mount: Deactivated successfully. Feb 12 19:08:48.127161 env[1143]: time="2024-02-12T19:08:48.127085115Z" level=info msg="CreateContainer within sandbox \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7\"" Feb 12 19:08:48.127837 env[1143]: time="2024-02-12T19:08:48.127706521Z" level=info msg="StartContainer for \"3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7\"" Feb 12 19:08:48.141406 systemd[1]: Started cri-containerd-3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7.scope. Feb 12 19:08:48.177482 env[1143]: time="2024-02-12T19:08:48.177436625Z" level=info msg="StartContainer for \"3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7\" returns successfully" Feb 12 19:08:48.179890 systemd[1]: cri-containerd-3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7.scope: Deactivated successfully. 
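The install-cni-plugin container started above (from docker.io/flannel/flannel-cni-plugin:v1.1.2) runs once and exits, which is why its containerd scope deactivates immediately after StartContainer returns. In the upstream kube-flannel manifest this step simply copies the flannel CNI binary from the image into the host's CNI bin directory; the sketch below shows that copy in Go. The /flannel and /opt/cni/bin/flannel paths come from the upstream manifest, not from this log, so treat them as assumptions.

package main

import (
	"io"
	"log"
	"os"
)

// copyFile copies src to dst with an executable mode, the essence of the
// one-shot install-cni-plugin step. Paths are assumptions taken from the
// upstream kube-flannel manifest.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyFile("/flannel", "/opt/cni/bin/flannel"); err != nil {
		log.Fatal(err)
	}
}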
Feb 12 19:08:48.213977 env[1143]: time="2024-02-12T19:08:48.213929206Z" level=info msg="shim disconnected" id=3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7 Feb 12 19:08:48.213977 env[1143]: time="2024-02-12T19:08:48.213976807Z" level=warning msg="cleaning up after shim disconnected" id=3326602a3b1e7ba282ade8222248b85de752b15e3126b48ada6192d99ed391c7 namespace=k8s.io Feb 12 19:08:48.214213 env[1143]: time="2024-02-12T19:08:48.213985967Z" level=info msg="cleaning up dead shim" Feb 12 19:08:48.221003 env[1143]: time="2024-02-12T19:08:48.220893632Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:08:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2333 runtime=io.containerd.runc.v2\n" Feb 12 19:08:48.519905 kubelet[1968]: E0212 19:08:48.519811 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:48.521055 env[1143]: time="2024-02-12T19:08:48.521018756Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 12 19:08:49.639452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount11749313.mount: Deactivated successfully. Feb 12 19:08:50.636180 env[1143]: time="2024-02-12T19:08:50.636102696Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:50.653168 env[1143]: time="2024-02-12T19:08:50.653103481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:50.668113 env[1143]: time="2024-02-12T19:08:50.667561684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:50.669706 env[1143]: time="2024-02-12T19:08:50.669665422Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:08:50.672607 env[1143]: time="2024-02-12T19:08:50.670436869Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 12 19:08:50.676341 env[1143]: time="2024-02-12T19:08:50.676304239Z" level=info msg="CreateContainer within sandbox \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 19:08:50.685284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941830903.mount: Deactivated successfully. Feb 12 19:08:50.686803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007960562.mount: Deactivated successfully. 
Feb 12 19:08:50.689541 env[1143]: time="2024-02-12T19:08:50.689497351Z" level=info msg="CreateContainer within sandbox \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90\"" Feb 12 19:08:50.690219 env[1143]: time="2024-02-12T19:08:50.690191677Z" level=info msg="StartContainer for \"04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90\"" Feb 12 19:08:50.710919 systemd[1]: Started cri-containerd-04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90.scope. Feb 12 19:08:50.755088 env[1143]: time="2024-02-12T19:08:50.755036190Z" level=info msg="StartContainer for \"04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90\" returns successfully" Feb 12 19:08:50.756436 systemd[1]: cri-containerd-04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90.scope: Deactivated successfully. Feb 12 19:08:50.806913 kubelet[1968]: I0212 19:08:50.806185 1968 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:08:50.842540 kubelet[1968]: I0212 19:08:50.842495 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:50.852425 systemd[1]: Created slice kubepods-burstable-podaf208b29_9ac4_41df_9792_d308036b329d.slice. Feb 12 19:08:50.861227 kubelet[1968]: I0212 19:08:50.861194 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:08:50.866024 systemd[1]: Created slice kubepods-burstable-pod914dce90_cc38_40c1_99a2_9c0b5867a546.slice. Feb 12 19:08:50.889658 env[1143]: time="2024-02-12T19:08:50.889485696Z" level=info msg="shim disconnected" id=04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90 Feb 12 19:08:50.890041 kubelet[1968]: I0212 19:08:50.890010 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcp65\" (UniqueName: \"kubernetes.io/projected/af208b29-9ac4-41df-9792-d308036b329d-kube-api-access-gcp65\") pod \"coredns-5d78c9869d-x5sd2\" (UID: \"af208b29-9ac4-41df-9792-d308036b329d\") " pod="kube-system/coredns-5d78c9869d-x5sd2" Feb 12 19:08:50.890160 kubelet[1968]: I0212 19:08:50.890147 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nrl\" (UniqueName: \"kubernetes.io/projected/914dce90-cc38-40c1-99a2-9c0b5867a546-kube-api-access-56nrl\") pod \"coredns-5d78c9869d-229xz\" (UID: \"914dce90-cc38-40c1-99a2-9c0b5867a546\") " pod="kube-system/coredns-5d78c9869d-229xz" Feb 12 19:08:50.890582 kubelet[1968]: I0212 19:08:50.890562 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af208b29-9ac4-41df-9792-d308036b329d-config-volume\") pod \"coredns-5d78c9869d-x5sd2\" (UID: \"af208b29-9ac4-41df-9792-d308036b329d\") " pod="kube-system/coredns-5d78c9869d-x5sd2" Feb 12 19:08:50.890685 kubelet[1968]: I0212 19:08:50.890673 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/914dce90-cc38-40c1-99a2-9c0b5867a546-config-volume\") pod \"coredns-5d78c9869d-229xz\" (UID: \"914dce90-cc38-40c1-99a2-9c0b5867a546\") " pod="kube-system/coredns-5d78c9869d-229xz" Feb 12 19:08:50.890822 env[1143]: time="2024-02-12T19:08:50.890794347Z" level=warning msg="cleaning up after shim disconnected" 
id=04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90 namespace=k8s.io Feb 12 19:08:50.890896 env[1143]: time="2024-02-12T19:08:50.890881068Z" level=info msg="cleaning up dead shim" Feb 12 19:08:50.897443 env[1143]: time="2024-02-12T19:08:50.897403803Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:08:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2389 runtime=io.containerd.runc.v2\n" Feb 12 19:08:51.156261 kubelet[1968]: E0212 19:08:51.156144 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:51.157068 env[1143]: time="2024-02-12T19:08:51.156683395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-x5sd2,Uid:af208b29-9ac4-41df-9792-d308036b329d,Namespace:kube-system,Attempt:0,}" Feb 12 19:08:51.168458 kubelet[1968]: E0212 19:08:51.168424 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:51.168883 env[1143]: time="2024-02-12T19:08:51.168837134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-229xz,Uid:914dce90-cc38-40c1-99a2-9c0b5867a546,Namespace:kube-system,Attempt:0,}" Feb 12 19:08:51.199213 env[1143]: time="2024-02-12T19:08:51.199136741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-x5sd2,Uid:af208b29-9ac4-41df-9792-d308036b329d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db244a6a937977062ce94025b0e12abb19dd490ce2b6df0cbf2bd6619d4c325a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 19:08:51.199502 kubelet[1968]: E0212 19:08:51.199465 1968 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db244a6a937977062ce94025b0e12abb19dd490ce2b6df0cbf2bd6619d4c325a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 19:08:51.199579 kubelet[1968]: E0212 19:08:51.199547 1968 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db244a6a937977062ce94025b0e12abb19dd490ce2b6df0cbf2bd6619d4c325a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-x5sd2" Feb 12 19:08:51.199579 kubelet[1968]: E0212 19:08:51.199570 1968 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db244a6a937977062ce94025b0e12abb19dd490ce2b6df0cbf2bd6619d4c325a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-x5sd2" Feb 12 19:08:51.199673 kubelet[1968]: E0212 19:08:51.199650 1968 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-x5sd2_kube-system(af208b29-9ac4-41df-9792-d308036b329d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-x5sd2_kube-system(af208b29-9ac4-41df-9792-d308036b329d)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"db244a6a937977062ce94025b0e12abb19dd490ce2b6df0cbf2bd6619d4c325a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5d78c9869d-x5sd2" podUID=af208b29-9ac4-41df-9792-d308036b329d Feb 12 19:08:51.201639 env[1143]: time="2024-02-12T19:08:51.201574881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-229xz,Uid:914dce90-cc38-40c1-99a2-9c0b5867a546,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6e59e06e0e69cfc34a25c79a10be62932ffe5bf0ce8f2b9c4018ca0a339e0c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 19:08:51.201826 kubelet[1968]: E0212 19:08:51.201794 1968 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6e59e06e0e69cfc34a25c79a10be62932ffe5bf0ce8f2b9c4018ca0a339e0c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 19:08:51.201880 kubelet[1968]: E0212 19:08:51.201838 1968 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6e59e06e0e69cfc34a25c79a10be62932ffe5bf0ce8f2b9c4018ca0a339e0c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-229xz" Feb 12 19:08:51.201880 kubelet[1968]: E0212 19:08:51.201866 1968 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6e59e06e0e69cfc34a25c79a10be62932ffe5bf0ce8f2b9c4018ca0a339e0c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-229xz" Feb 12 19:08:51.201949 kubelet[1968]: E0212 19:08:51.201912 1968 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-229xz_kube-system(914dce90-cc38-40c1-99a2-9c0b5867a546)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-229xz_kube-system(914dce90-cc38-40c1-99a2-9c0b5867a546)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6e59e06e0e69cfc34a25c79a10be62932ffe5bf0ce8f2b9c4018ca0a339e0c2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5d78c9869d-229xz" podUID=914dce90-cc38-40c1-99a2-9c0b5867a546 Feb 12 19:08:51.526012 kubelet[1968]: E0212 19:08:51.525975 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:51.528337 env[1143]: time="2024-02-12T19:08:51.528261023Z" level=info msg="CreateContainer within sandbox \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 12 19:08:51.539346 env[1143]: time="2024-02-12T19:08:51.539279393Z" level=info msg="CreateContainer within sandbox \"fade1c204dc689656b330c2a6acbcaadc8e2126368d5a0a734929957df7c37f4\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} 
returns container id \"38ca59b3233aff86a951186e3317a3fe02756bcb75e42547b8cc1573098352ab\"" Feb 12 19:08:51.539831 env[1143]: time="2024-02-12T19:08:51.539798557Z" level=info msg="StartContainer for \"38ca59b3233aff86a951186e3317a3fe02756bcb75e42547b8cc1573098352ab\"" Feb 12 19:08:51.557709 systemd[1]: Started cri-containerd-38ca59b3233aff86a951186e3317a3fe02756bcb75e42547b8cc1573098352ab.scope. Feb 12 19:08:51.600734 env[1143]: time="2024-02-12T19:08:51.599570524Z" level=info msg="StartContainer for \"38ca59b3233aff86a951186e3317a3fe02756bcb75e42547b8cc1573098352ab\" returns successfully" Feb 12 19:08:51.685536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04517b03187bc71fb356a1e1f0b9e72d8f5bfec6674a54b22ce62c57ccad0f90-rootfs.mount: Deactivated successfully. Feb 12 19:08:52.530164 kubelet[1968]: E0212 19:08:52.530118 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:52.674647 systemd-networkd[1052]: flannel.1: Link UP Feb 12 19:08:52.674653 systemd-networkd[1052]: flannel.1: Gained carrier Feb 12 19:08:53.531895 kubelet[1968]: E0212 19:08:53.531866 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:08:53.939508 systemd-networkd[1052]: flannel.1: Gained IPv6LL Feb 12 19:09:00.554714 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:40230.service. Feb 12 19:09:00.597060 sshd[2603]: Accepted publickey for core from 10.0.0.1 port 40230 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:00.598265 sshd[2603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:00.601791 systemd-logind[1126]: New session 6 of user core. Feb 12 19:09:00.602845 systemd[1]: Started session-6.scope. Feb 12 19:09:00.721097 sshd[2603]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:00.723893 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:40230.service: Deactivated successfully. Feb 12 19:09:00.725219 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:09:00.726274 systemd-logind[1126]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:09:00.727216 systemd-logind[1126]: Removed session 6. 
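The earlier RunPodSandbox failures for the coredns pods ("loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") clear up from this point: once the kube-flannel container is running and flannel.1 gains carrier, flanneld writes /run/flannel/subnet.env and the CNI plugin can read it. Below is a representative file plus a simplified parser sketch in Go; the concrete values are assumptions inferred from this log (node pod CIDR 192.168.0.0/24, MTU 1450 and the 192.168.0.0/17 route in the bridge netconf further down), not the file's actual contents.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Representative /run/flannel/subnet.env written by flanneld. Values are
// assumptions inferred from this log, not copied from the real file.
const subnetEnv = `FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
`

// parseSubnetEnv is a simplified stand-in for the loadFlannelSubnetEnv step
// named in the earlier errors: read KEY=VALUE pairs, one per line.
func parseSubnetEnv(s string) map[string]string {
	env := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		if key, val, ok := strings.Cut(sc.Text(), "="); ok {
			env[key] = val
		}
	}
	return env
}

func main() {
	fmt.Println(parseSubnetEnv(subnetEnv)["FLANNEL_SUBNET"])
}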
Feb 12 19:09:02.454403 kubelet[1968]: E0212 19:09:02.454349 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:02.454743 kubelet[1968]: E0212 19:09:02.454444 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:02.454827 env[1143]: time="2024-02-12T19:09:02.454775752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-x5sd2,Uid:af208b29-9ac4-41df-9792-d308036b329d,Namespace:kube-system,Attempt:0,}" Feb 12 19:09:02.455025 env[1143]: time="2024-02-12T19:09:02.454801512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-229xz,Uid:914dce90-cc38-40c1-99a2-9c0b5867a546,Namespace:kube-system,Attempt:0,}" Feb 12 19:09:02.534998 systemd-networkd[1052]: cni0: Link UP Feb 12 19:09:02.535007 systemd-networkd[1052]: cni0: Gained carrier Feb 12 19:09:02.535850 systemd-networkd[1052]: cni0: Lost carrier Feb 12 19:09:02.554299 systemd-networkd[1052]: veth7ee01a55: Link UP Feb 12 19:09:02.555956 kernel: cni0: port 1(veth7ee01a55) entered blocking state Feb 12 19:09:02.556057 kernel: cni0: port 1(veth7ee01a55) entered disabled state Feb 12 19:09:02.556734 kernel: device veth7ee01a55 entered promiscuous mode Feb 12 19:09:02.559357 kernel: cni0: port 1(veth7ee01a55) entered blocking state Feb 12 19:09:02.559459 kernel: cni0: port 1(veth7ee01a55) entered forwarding state Feb 12 19:09:02.560931 kernel: cni0: port 1(veth7ee01a55) entered disabled state Feb 12 19:09:02.560844 systemd-networkd[1052]: veth54174242: Link UP Feb 12 19:09:02.563587 kernel: cni0: port 2(veth54174242) entered blocking state Feb 12 19:09:02.563659 kernel: cni0: port 2(veth54174242) entered disabled state Feb 12 19:09:02.565849 kernel: device veth54174242 entered promiscuous mode Feb 12 19:09:02.567227 kernel: cni0: port 2(veth54174242) entered blocking state Feb 12 19:09:02.567472 kernel: cni0: port 2(veth54174242) entered forwarding state Feb 12 19:09:02.569425 kernel: cni0: port 2(veth54174242) entered disabled state Feb 12 19:09:02.569490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7ee01a55: link becomes ready Feb 12 19:09:02.570497 kernel: cni0: port 1(veth7ee01a55) entered blocking state Feb 12 19:09:02.570558 kernel: cni0: port 1(veth7ee01a55) entered forwarding state Feb 12 19:09:02.570659 systemd-networkd[1052]: veth7ee01a55: Gained carrier Feb 12 19:09:02.572476 systemd-networkd[1052]: cni0: Gained carrier Feb 12 19:09:02.575167 env[1143]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000020928), "name":"cbr0", "type":"bridge"} Feb 12 19:09:02.575167 env[1143]: delegateAdd: netconf sent to delegate plugin: Feb 12 19:09:02.576415 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth54174242: link becomes ready Feb 12 19:09:02.576493 kernel: cni0: port 2(veth54174242) entered blocking state Feb 12 19:09:02.576510 kernel: cni0: port 2(veth54174242) entered forwarding state Feb 12 19:09:02.576954 
systemd-networkd[1052]: veth54174242: Gained carrier Feb 12 19:09:02.579507 env[1143]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 12 19:09:02.579507 env[1143]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Feb 12 19:09:02.579507 env[1143]: delegateAdd: netconf sent to delegate plugin: Feb 12 19:09:02.591585 env[1143]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T19:09:02.591468517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:09:02.591585 env[1143]: time="2024-02-12T19:09:02.591555357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:09:02.591785 env[1143]: time="2024-02-12T19:09:02.591566397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:09:02.592262 env[1143]: time="2024-02-12T19:09:02.592213041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:09:02.592333 env[1143]: time="2024-02-12T19:09:02.592278321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:09:02.592333 env[1143]: time="2024-02-12T19:09:02.592306081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:09:02.592333 env[1143]: time="2024-02-12T19:09:02.592298921Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7da7f47207a0483d5cdc59539147aad7de76b699a3cba13b34a057a4c5832aa9 pid=2701 runtime=io.containerd.runc.v2 Feb 12 19:09:02.592559 env[1143]: time="2024-02-12T19:09:02.592522882Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/451d62d0184984acddeda65002841649049131760da1dd664ebe332553f7be49 pid=2710 runtime=io.containerd.runc.v2 Feb 12 19:09:02.609010 systemd[1]: Started cri-containerd-7da7f47207a0483d5cdc59539147aad7de76b699a3cba13b34a057a4c5832aa9.scope. Feb 12 19:09:02.621099 systemd[1]: Started cri-containerd-451d62d0184984acddeda65002841649049131760da1dd664ebe332553f7be49.scope. 
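The delegateAdd lines above dump, once per coredns sandbox, the netconf that the flannel CNI plugin hands to the bridge plugin: a cbr0 bridge with hairpin mode, host-local IPAM over this node's 192.168.0.0/24 range, a route to the wider 192.168.0.0/17 flannel network, and MTU 1450. The Go sketch below simply re-renders that same configuration as a typed struct and marshals it back to JSON; all field values are copied from the log, while the struct and field names are illustrative rather than flannel's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative types for the delegated bridge netconf seen in the log;
// flannel's real implementation uses its own structures.
type ipamRange struct {
	Subnet string `json:"subnet"`
}

type ipamConf struct {
	Ranges [][]ipamRange       `json:"ranges"`
	Routes []map[string]string `json:"routes"`
	Type   string              `json:"type"`
}

type netConf struct {
	CNIVersion       string   `json:"cniVersion"`
	HairpinMode      bool     `json:"hairpinMode"`
	IPMasq           bool     `json:"ipMasq"`
	IPAM             ipamConf `json:"ipam"`
	IsDefaultGateway bool     `json:"isDefaultGateway"`
	IsGateway        bool     `json:"isGateway"`
	MTU              uint     `json:"mtu"`
	Name             string   `json:"name"`
	Type             string   `json:"type"`
}

func main() {
	// Field values copied from the delegateAdd JSON in the log above.
	conf := netConf{
		CNIVersion:  "0.3.1",
		HairpinMode: true,
		IPMasq:      false,
		IPAM: ipamConf{
			Ranges: [][]ipamRange{{{Subnet: "192.168.0.0/24"}}},
			Routes: []map[string]string{{"dst": "192.168.0.0/17"}},
			Type:   "host-local",
		},
		IsDefaultGateway: true,
		IsGateway:        true,
		MTU:              1450,
		Name:             "cbr0",
		Type:             "bridge",
	}
	b, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(b))
}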
Feb 12 19:09:02.660589 systemd-resolved[1085]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:09:02.664056 systemd-resolved[1085]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:09:02.682565 env[1143]: time="2024-02-12T19:09:02.682514759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-x5sd2,Uid:af208b29-9ac4-41df-9792-d308036b329d,Namespace:kube-system,Attempt:0,} returns sandbox id \"451d62d0184984acddeda65002841649049131760da1dd664ebe332553f7be49\"" Feb 12 19:09:02.684508 kubelet[1968]: E0212 19:09:02.683260 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:02.685136 env[1143]: time="2024-02-12T19:09:02.685103573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-229xz,Uid:914dce90-cc38-40c1-99a2-9c0b5867a546,Namespace:kube-system,Attempt:0,} returns sandbox id \"7da7f47207a0483d5cdc59539147aad7de76b699a3cba13b34a057a4c5832aa9\"" Feb 12 19:09:02.690737 kubelet[1968]: E0212 19:09:02.690315 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:02.691109 env[1143]: time="2024-02-12T19:09:02.691075285Z" level=info msg="CreateContainer within sandbox \"451d62d0184984acddeda65002841649049131760da1dd664ebe332553f7be49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:09:02.692747 env[1143]: time="2024-02-12T19:09:02.692705173Z" level=info msg="CreateContainer within sandbox \"7da7f47207a0483d5cdc59539147aad7de76b699a3cba13b34a057a4c5832aa9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:09:02.805670 env[1143]: time="2024-02-12T19:09:02.805556772Z" level=info msg="CreateContainer within sandbox \"451d62d0184984acddeda65002841649049131760da1dd664ebe332553f7be49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bff4c0a9e07dd97e3b78ca5810b975512250887becad966af0392da12de7298\"" Feb 12 19:09:02.806952 env[1143]: time="2024-02-12T19:09:02.806836658Z" level=info msg="StartContainer for \"1bff4c0a9e07dd97e3b78ca5810b975512250887becad966af0392da12de7298\"" Feb 12 19:09:02.811421 env[1143]: time="2024-02-12T19:09:02.811365642Z" level=info msg="CreateContainer within sandbox \"7da7f47207a0483d5cdc59539147aad7de76b699a3cba13b34a057a4c5832aa9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d81609dd4e6dc906a8442bae989ff065f1fd707dbfc3d8d2090d78e84e37435f\"" Feb 12 19:09:02.812183 env[1143]: time="2024-02-12T19:09:02.812144127Z" level=info msg="StartContainer for \"d81609dd4e6dc906a8442bae989ff065f1fd707dbfc3d8d2090d78e84e37435f\"" Feb 12 19:09:02.826867 systemd[1]: Started cri-containerd-1bff4c0a9e07dd97e3b78ca5810b975512250887becad966af0392da12de7298.scope. Feb 12 19:09:02.833316 systemd[1]: Started cri-containerd-d81609dd4e6dc906a8442bae989ff065f1fd707dbfc3d8d2090d78e84e37435f.scope. 
Feb 12 19:09:02.897499 env[1143]: time="2024-02-12T19:09:02.897440059Z" level=info msg="StartContainer for \"1bff4c0a9e07dd97e3b78ca5810b975512250887becad966af0392da12de7298\" returns successfully" Feb 12 19:09:02.900216 env[1143]: time="2024-02-12T19:09:02.900165433Z" level=info msg="StartContainer for \"d81609dd4e6dc906a8442bae989ff065f1fd707dbfc3d8d2090d78e84e37435f\" returns successfully" Feb 12 19:09:03.551683 kubelet[1968]: E0212 19:09:03.551654 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:03.552775 kubelet[1968]: E0212 19:09:03.552753 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:03.564594 kubelet[1968]: I0212 19:09:03.564560 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-x5sd2" podStartSLOduration=17.564503178 podCreationTimestamp="2024-02-12 19:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:09:03.564019615 +0000 UTC m=+32.197221580" watchObservedRunningTime="2024-02-12 19:09:03.564503178 +0000 UTC m=+32.197705143" Feb 12 19:09:03.564820 kubelet[1968]: I0212 19:09:03.564804 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-c8cxd" podStartSLOduration=13.779433615 podCreationTimestamp="2024-02-12 19:08:46 +0000 UTC" firstStartedPulling="2024-02-12 19:08:46.885343347 +0000 UTC m=+15.518545312" lastFinishedPulling="2024-02-12 19:08:50.670696471 +0000 UTC m=+19.303898436" observedRunningTime="2024-02-12 19:08:52.540595844 +0000 UTC m=+21.173797809" watchObservedRunningTime="2024-02-12 19:09:03.564786739 +0000 UTC m=+32.197988704" Feb 12 19:09:03.591494 kubelet[1968]: I0212 19:09:03.591464 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-229xz" podStartSLOduration=17.591426156 podCreationTimestamp="2024-02-12 19:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:09:03.591327436 +0000 UTC m=+32.224529401" watchObservedRunningTime="2024-02-12 19:09:03.591426156 +0000 UTC m=+32.224628121" Feb 12 19:09:03.795520 systemd-networkd[1052]: veth54174242: Gained IPv6LL Feb 12 19:09:04.243574 systemd-networkd[1052]: veth7ee01a55: Gained IPv6LL Feb 12 19:09:04.499551 systemd-networkd[1052]: cni0: Gained IPv6LL Feb 12 19:09:04.554290 kubelet[1968]: E0212 19:09:04.554261 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:04.554610 kubelet[1968]: E0212 19:09:04.554310 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:09:05.726735 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:58260.service. 
Feb 12 19:09:05.771907 sshd[2879]: Accepted publickey for core from 10.0.0.1 port 58260 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:05.773324 sshd[2879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:05.777715 systemd-logind[1126]: New session 7 of user core. Feb 12 19:09:05.779361 systemd[1]: Started session-7.scope. Feb 12 19:09:05.918394 sshd[2879]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:05.922674 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:58260.service: Deactivated successfully. Feb 12 19:09:05.923435 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:09:05.924091 systemd-logind[1126]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:09:05.924918 systemd-logind[1126]: Removed session 7. Feb 12 19:09:10.922946 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:58276.service. Feb 12 19:09:10.974745 sshd[2914]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:10.976186 sshd[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:10.980349 systemd-logind[1126]: New session 8 of user core. Feb 12 19:09:10.981143 systemd[1]: Started session-8.scope. Feb 12 19:09:11.099641 sshd[2914]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:11.103491 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:58282.service. Feb 12 19:09:11.104595 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:58276.service: Deactivated successfully. Feb 12 19:09:11.105351 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:09:11.109012 systemd-logind[1126]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:09:11.110821 systemd-logind[1126]: Removed session 8. Feb 12 19:09:11.149773 sshd[2927]: Accepted publickey for core from 10.0.0.1 port 58282 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:11.150136 sshd[2927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:11.154584 systemd-logind[1126]: New session 9 of user core. Feb 12 19:09:11.154657 systemd[1]: Started session-9.scope. Feb 12 19:09:11.417837 sshd[2927]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:11.420871 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:58288.service. Feb 12 19:09:11.432792 systemd-logind[1126]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:09:11.433029 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:58282.service: Deactivated successfully. Feb 12 19:09:11.433807 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:09:11.434494 systemd-logind[1126]: Removed session 9. Feb 12 19:09:11.474384 sshd[2939]: Accepted publickey for core from 10.0.0.1 port 58288 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:11.475729 sshd[2939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:11.479671 systemd-logind[1126]: New session 10 of user core. Feb 12 19:09:11.480300 systemd[1]: Started session-10.scope. Feb 12 19:09:11.610671 sshd[2939]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:11.613028 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:09:11.613599 systemd-logind[1126]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:09:11.613721 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:58288.service: Deactivated successfully. 
Feb 12 19:09:11.614644 systemd-logind[1126]: Removed session 10. Feb 12 19:09:16.614736 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:53178.service. Feb 12 19:09:16.661161 sshd[2974]: Accepted publickey for core from 10.0.0.1 port 53178 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:16.662888 sshd[2974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:16.668004 systemd[1]: Started session-11.scope. Feb 12 19:09:16.669420 systemd-logind[1126]: New session 11 of user core. Feb 12 19:09:16.795323 sshd[2974]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:16.799708 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:53178.service: Deactivated successfully. Feb 12 19:09:16.800747 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:09:16.801557 systemd-logind[1126]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:09:16.802606 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:53194.service. Feb 12 19:09:16.803312 systemd-logind[1126]: Removed session 11. Feb 12 19:09:16.845491 sshd[2987]: Accepted publickey for core from 10.0.0.1 port 53194 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:16.846689 sshd[2987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:16.849684 systemd-logind[1126]: New session 12 of user core. Feb 12 19:09:16.850696 systemd[1]: Started session-12.scope. Feb 12 19:09:17.062582 sshd[2987]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:17.066000 systemd-logind[1126]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:09:17.066965 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:53198.service. Feb 12 19:09:17.067473 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:53194.service: Deactivated successfully. Feb 12 19:09:17.068116 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:09:17.068824 systemd-logind[1126]: Removed session 12. Feb 12 19:09:17.108655 sshd[2999]: Accepted publickey for core from 10.0.0.1 port 53198 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:17.110049 sshd[2999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:17.113428 systemd-logind[1126]: New session 13 of user core. Feb 12 19:09:17.113931 systemd[1]: Started session-13.scope. Feb 12 19:09:17.958919 sshd[2999]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:17.962012 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:53202.service. Feb 12 19:09:17.964928 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:53198.service: Deactivated successfully. Feb 12 19:09:17.965689 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:09:17.966252 systemd-logind[1126]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:09:17.967044 systemd-logind[1126]: Removed session 13. Feb 12 19:09:18.009300 sshd[3043]: Accepted publickey for core from 10.0.0.1 port 53202 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:18.010782 sshd[3043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:18.014436 systemd-logind[1126]: New session 14 of user core. Feb 12 19:09:18.014900 systemd[1]: Started session-14.scope. Feb 12 19:09:18.326384 sshd[3043]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:18.332350 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:53218.service. Feb 12 19:09:18.334230 systemd-logind[1126]: Session 14 logged out. 
Waiting for processes to exit. Feb 12 19:09:18.334484 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:53202.service: Deactivated successfully. Feb 12 19:09:18.335307 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:09:18.337128 systemd-logind[1126]: Removed session 14. Feb 12 19:09:18.382000 sshd[3054]: Accepted publickey for core from 10.0.0.1 port 53218 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:18.383313 sshd[3054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:18.387058 systemd-logind[1126]: New session 15 of user core. Feb 12 19:09:18.387498 systemd[1]: Started session-15.scope. Feb 12 19:09:18.498286 sshd[3054]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:18.500684 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:53218.service: Deactivated successfully. Feb 12 19:09:18.501477 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:09:18.502073 systemd-logind[1126]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:09:18.502737 systemd-logind[1126]: Removed session 15. Feb 12 19:09:23.502785 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:39508.service. Feb 12 19:09:23.547855 sshd[3094]: Accepted publickey for core from 10.0.0.1 port 39508 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:23.549273 sshd[3094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:23.554505 systemd[1]: Started session-16.scope. Feb 12 19:09:23.555380 systemd-logind[1126]: New session 16 of user core. Feb 12 19:09:23.669581 sshd[3094]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:23.672249 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:39508.service: Deactivated successfully. Feb 12 19:09:23.673109 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:09:23.674938 systemd-logind[1126]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:09:23.675807 systemd-logind[1126]: Removed session 16. Feb 12 19:09:28.674560 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:39518.service. Feb 12 19:09:28.717078 sshd[3129]: Accepted publickey for core from 10.0.0.1 port 39518 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:28.718672 sshd[3129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:28.722197 systemd-logind[1126]: New session 17 of user core. Feb 12 19:09:28.722656 systemd[1]: Started session-17.scope. Feb 12 19:09:28.832007 sshd[3129]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:28.834257 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:39518.service: Deactivated successfully. Feb 12 19:09:28.835101 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:09:28.835697 systemd-logind[1126]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:09:28.836465 systemd-logind[1126]: Removed session 17. Feb 12 19:09:33.836880 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:59034.service. Feb 12 19:09:33.880571 sshd[3165]: Accepted publickey for core from 10.0.0.1 port 59034 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:09:33.881806 sshd[3165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:09:33.885640 systemd-logind[1126]: New session 18 of user core. Feb 12 19:09:33.886095 systemd[1]: Started session-18.scope. 
Feb 12 19:09:33.998556 sshd[3165]: pam_unix(sshd:session): session closed for user core Feb 12 19:09:34.002732 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:59034.service: Deactivated successfully. Feb 12 19:09:34.003649 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:09:34.004975 systemd-logind[1126]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:09:34.005947 systemd-logind[1126]: Removed session 18.