Apr 12 18:22:28.729905 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 12 18:22:28.729924 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Apr 12 17:21:24 -00 2024
Apr 12 18:22:28.729932 kernel: efi: EFI v2.70 by EDK II
Apr 12 18:22:28.729937 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Apr 12 18:22:28.729942 kernel: random: crng init done
Apr 12 18:22:28.729947 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:22:28.729954 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Apr 12 18:22:28.729960 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Apr 12 18:22:28.729973 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.729978 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.729984 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.729989 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.729995 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.730000 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.730008 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.730014 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.730020 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:22:28.730026 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Apr 12 18:22:28.730031 kernel: NUMA: Failed to initialise from firmware
Apr 12 18:22:28.730037 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Apr 12 18:22:28.730042 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Apr 12 18:22:28.730048 kernel: Zone ranges:
Apr 12 18:22:28.730053 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Apr 12 18:22:28.730060 kernel: DMA32 empty
Apr 12 18:22:28.730065 kernel: Normal empty
Apr 12 18:22:28.730071 kernel: Movable zone start for each node
Apr 12 18:22:28.730076 kernel: Early memory node ranges
Apr 12 18:22:28.730082 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Apr 12 18:22:28.730088 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Apr 12 18:22:28.730093 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Apr 12 18:22:28.730099 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Apr 12 18:22:28.730104 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Apr 12 18:22:28.730110 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Apr 12 18:22:28.730115 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Apr 12 18:22:28.730121 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Apr 12 18:22:28.730127 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Apr 12 18:22:28.730133 kernel: psci: probing for conduit method from ACPI.
Apr 12 18:22:28.730138 kernel: psci: PSCIv1.1 detected in firmware.
Apr 12 18:22:28.730144 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 12 18:22:28.730149 kernel: psci: Trusted OS migration not required
Apr 12 18:22:28.730158 kernel: psci: SMC Calling Convention v1.1
Apr 12 18:22:28.730164 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 12 18:22:28.730171 kernel: ACPI: SRAT not present
Apr 12 18:22:28.730177 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Apr 12 18:22:28.730183 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Apr 12 18:22:28.730189 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 12 18:22:28.730195 kernel: Detected PIPT I-cache on CPU0
Apr 12 18:22:28.730201 kernel: CPU features: detected: GIC system register CPU interface
Apr 12 18:22:28.730207 kernel: CPU features: detected: Hardware dirty bit management
Apr 12 18:22:28.730213 kernel: CPU features: detected: Spectre-v4
Apr 12 18:22:28.730219 kernel: CPU features: detected: Spectre-BHB
Apr 12 18:22:28.730226 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 12 18:22:28.730232 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 12 18:22:28.730238 kernel: CPU features: detected: ARM erratum 1418040
Apr 12 18:22:28.730244 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Apr 12 18:22:28.730250 kernel: Policy zone: DMA
Apr 12 18:22:28.730257 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:22:28.730263 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:22:28.730269 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:22:28.730275 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:22:28.730281 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:22:28.730288 kernel: Memory: 2457464K/2572288K available (9792K kernel code, 2092K rwdata, 7568K rodata, 36352K init, 777K bss, 114824K reserved, 0K cma-reserved)
Apr 12 18:22:28.730295 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 12 18:22:28.730301 kernel: trace event string verifier disabled
Apr 12 18:22:28.730307 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 12 18:22:28.730313 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:22:28.730319 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 12 18:22:28.730325 kernel: Trampoline variant of Tasks RCU enabled.
Apr 12 18:22:28.730332 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:22:28.730338 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:22:28.730344 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 12 18:22:28.730350 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 12 18:22:28.730355 kernel: GICv3: 256 SPIs implemented
Apr 12 18:22:28.730363 kernel: GICv3: 0 Extended SPIs implemented
Apr 12 18:22:28.730369 kernel: GICv3: Distributor has no Range Selector support
Apr 12 18:22:28.730374 kernel: Root IRQ handler: gic_handle_irq
Apr 12 18:22:28.730380 kernel: GICv3: 16 PPIs implemented
Apr 12 18:22:28.730386 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 12 18:22:28.730392 kernel: ACPI: SRAT not present
Apr 12 18:22:28.730398 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 12 18:22:28.730404 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 12 18:22:28.730410 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Apr 12 18:22:28.730416 kernel: GICv3: using LPI property table @0x00000000400d0000
Apr 12 18:22:28.730422 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Apr 12 18:22:28.730428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:22:28.730436 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 12 18:22:28.730442 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 12 18:22:28.730448 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 12 18:22:28.730454 kernel: arm-pv: using stolen time PV
Apr 12 18:22:28.730460 kernel: Console: colour dummy device 80x25
Apr 12 18:22:28.730466 kernel: ACPI: Core revision 20210730
Apr 12 18:22:28.730473 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 12 18:22:28.730479 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:22:28.730485 kernel: LSM: Security Framework initializing
Apr 12 18:22:28.730491 kernel: SELinux: Initializing.
Apr 12 18:22:28.730499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:22:28.730505 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:22:28.730511 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:22:28.730517 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 12 18:22:28.730523 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 12 18:22:28.730529 kernel: Remapping and enabling EFI services.
Apr 12 18:22:28.730535 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:22:28.730542 kernel: Detected PIPT I-cache on CPU1
Apr 12 18:22:28.730548 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 12 18:22:28.730555 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Apr 12 18:22:28.730561 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:22:28.730567 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 12 18:22:28.730574 kernel: Detected PIPT I-cache on CPU2
Apr 12 18:22:28.730580 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 12 18:22:28.730586 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Apr 12 18:22:28.730592 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:22:28.730598 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 12 18:22:28.730604 kernel: Detected PIPT I-cache on CPU3
Apr 12 18:22:28.730610 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 12 18:22:28.730618 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Apr 12 18:22:28.730624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:22:28.730641 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 12 18:22:28.730647 kernel: smp: Brought up 1 node, 4 CPUs
Apr 12 18:22:28.730658 kernel: SMP: Total of 4 processors activated.
Apr 12 18:22:28.730666 kernel: CPU features: detected: 32-bit EL0 Support
Apr 12 18:22:28.730673 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 12 18:22:28.730679 kernel: CPU features: detected: Common not Private translations
Apr 12 18:22:28.730686 kernel: CPU features: detected: CRC32 instructions
Apr 12 18:22:28.730692 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 12 18:22:28.730699 kernel: CPU features: detected: LSE atomic instructions
Apr 12 18:22:28.730705 kernel: CPU features: detected: Privileged Access Never
Apr 12 18:22:28.730713 kernel: CPU features: detected: RAS Extension Support
Apr 12 18:22:28.730719 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 12 18:22:28.730726 kernel: CPU: All CPU(s) started at EL1
Apr 12 18:22:28.730732 kernel: alternatives: patching kernel code
Apr 12 18:22:28.730740 kernel: devtmpfs: initialized
Apr 12 18:22:28.730746 kernel: KASLR enabled
Apr 12 18:22:28.730753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:22:28.730759 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 12 18:22:28.730766 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:22:28.730772 kernel: SMBIOS 3.0.0 present.
Apr 12 18:22:28.730779 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 12 18:22:28.730785 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:22:28.730792 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 12 18:22:28.730798 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 12 18:22:28.730806 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 12 18:22:28.730812 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:22:28.730819 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Apr 12 18:22:28.730825 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:22:28.730832 kernel: cpuidle: using governor menu
Apr 12 18:22:28.730838 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 12 18:22:28.730845 kernel: ASID allocator initialised with 32768 entries
Apr 12 18:22:28.730851 kernel: ACPI: bus type PCI registered
Apr 12 18:22:28.730858 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:22:28.730865 kernel: Serial: AMBA PL011 UART driver
Apr 12 18:22:28.730872 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:22:28.730878 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Apr 12 18:22:28.730885 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:22:28.730891 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Apr 12 18:22:28.730897 kernel: cryptd: max_cpu_qlen set to 1000
Apr 12 18:22:28.730904 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 12 18:22:28.730911 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:22:28.730917 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:22:28.730924 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:22:28.730931 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:22:28.730937 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:22:28.730944 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:22:28.730950 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:22:28.730957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:22:28.730967 kernel: ACPI: Interpreter enabled
Apr 12 18:22:28.730974 kernel: ACPI: Using GIC for interrupt routing
Apr 12 18:22:28.730980 kernel: ACPI: MCFG table detected, 1 entries
Apr 12 18:22:28.730988 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 12 18:22:28.730994 kernel: printk: console [ttyAMA0] enabled
Apr 12 18:22:28.731001 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:22:28.731135 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:22:28.731204 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 12 18:22:28.731264 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 12 18:22:28.731322 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 12 18:22:28.731391 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 12 18:22:28.731401 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 12 18:22:28.731407 kernel: PCI host bridge to bus 0000:00
Apr 12 18:22:28.731474 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 12 18:22:28.731527 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 12 18:22:28.731579 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 12 18:22:28.731660 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:22:28.731735 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 12 18:22:28.731802 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:22:28.731861 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Apr 12 18:22:28.731918 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Apr 12 18:22:28.731982 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 12 18:22:28.732040 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 12 18:22:28.732097 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Apr 12 18:22:28.732155 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Apr 12 18:22:28.732206 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 12 18:22:28.732256 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 12 18:22:28.732308 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 12 18:22:28.732317 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 12 18:22:28.732323 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 12 18:22:28.732330 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 12 18:22:28.732338 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 12 18:22:28.732344 kernel: iommu: Default domain type: Translated
Apr 12 18:22:28.732351 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 12 18:22:28.732357 kernel: vgaarb: loaded
Apr 12 18:22:28.732364 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:22:28.732371 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:22:28.732378 kernel: PTP clock support registered
Apr 12 18:22:28.732384 kernel: Registered efivars operations
Apr 12 18:22:28.732391 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 12 18:22:28.732398 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:22:28.732405 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:22:28.732412 kernel: pnp: PnP ACPI init
Apr 12 18:22:28.732478 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 12 18:22:28.732488 kernel: pnp: PnP ACPI: found 1 devices
Apr 12 18:22:28.732494 kernel: NET: Registered PF_INET protocol family
Apr 12 18:22:28.732501 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:22:28.732508 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:22:28.732514 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:22:28.732523 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:22:28.732530 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:22:28.732536 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:22:28.732543 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:22:28.732549 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:22:28.732556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:22:28.732562 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:22:28.732569 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 12 18:22:28.732576 kernel: kvm [1]: HYP mode not available
Apr 12 18:22:28.732583 kernel: Initialise system trusted keyrings
Apr 12 18:22:28.732589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:22:28.732596 kernel: Key type asymmetric registered
Apr 12 18:22:28.732602 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:22:28.732609 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:22:28.732615 kernel: io scheduler mq-deadline registered
Apr 12 18:22:28.732621 kernel: io scheduler kyber registered
Apr 12 18:22:28.732634 kernel: io scheduler bfq registered
Apr 12 18:22:28.732641 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 12 18:22:28.732649 kernel: ACPI: button: Power Button [PWRB]
Apr 12 18:22:28.732657 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 12 18:22:28.732720 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Apr 12 18:22:28.732729 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:22:28.732735 kernel: thunder_xcv, ver 1.0
Apr 12 18:22:28.732742 kernel: thunder_bgx, ver 1.0
Apr 12 18:22:28.732748 kernel: nicpf, ver 1.0
Apr 12 18:22:28.732755 kernel: nicvf, ver 1.0
Apr 12 18:22:28.732820 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 12 18:22:28.732877 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-04-12T18:22:28 UTC (1712946148)
Apr 12 18:22:28.732885 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 12 18:22:28.732892 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:22:28.732899 kernel: Segment Routing with IPv6
Apr 12 18:22:28.732906 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:22:28.732912 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:22:28.732919 kernel: Key type dns_resolver registered
Apr 12 18:22:28.732925 kernel: registered taskstats version 1
Apr 12 18:22:28.732933 kernel: Loading compiled-in X.509 certificates
Apr 12 18:22:28.732940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 8c258d82bbd8df4a9da2c0ea4108142f04be6b34'
Apr 12 18:22:28.732946 kernel: Key type .fscrypt registered
Apr 12 18:22:28.732953 kernel: Key type fscrypt-provisioning registered
Apr 12 18:22:28.732960 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:22:28.732971 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:22:28.732978 kernel: ima: No architecture policies found
Apr 12 18:22:28.732984 kernel: Freeing unused kernel memory: 36352K
Apr 12 18:22:28.732992 kernel: Run /init as init process
Apr 12 18:22:28.732998 kernel: with arguments:
Apr 12 18:22:28.733005 kernel: /init
Apr 12 18:22:28.733011 kernel: with environment:
Apr 12 18:22:28.733017 kernel: HOME=/
Apr 12 18:22:28.733023 kernel: TERM=linux
Apr 12 18:22:28.733030 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:22:28.733038 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:22:28.733046 systemd[1]: Detected virtualization kvm.
Apr 12 18:22:28.733054 systemd[1]: Detected architecture arm64.
Apr 12 18:22:28.733061 systemd[1]: Running in initrd.
Apr 12 18:22:28.733068 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:22:28.733075 systemd[1]: Hostname set to .
Apr 12 18:22:28.733082 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:22:28.733089 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:22:28.733095 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:22:28.733102 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:22:28.733110 systemd[1]: Reached target paths.target.
Apr 12 18:22:28.733117 systemd[1]: Reached target slices.target.
Apr 12 18:22:28.733123 systemd[1]: Reached target swap.target.
Apr 12 18:22:28.733130 systemd[1]: Reached target timers.target.
Apr 12 18:22:28.733137 systemd[1]: Listening on iscsid.socket.
Apr 12 18:22:28.733144 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:22:28.733151 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:22:28.733160 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:22:28.733166 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:22:28.733173 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:22:28.733180 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:22:28.733187 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:22:28.733194 systemd[1]: Reached target sockets.target.
Apr 12 18:22:28.733200 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:22:28.733207 systemd[1]: Finished network-cleanup.service.
Apr 12 18:22:28.733214 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:22:28.733222 systemd[1]: Starting systemd-journald.service...
Apr 12 18:22:28.733229 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:22:28.733236 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:22:28.733243 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:22:28.733249 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:22:28.733256 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:22:28.733263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:22:28.733270 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:22:28.733277 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:22:28.733284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:22:28.733292 kernel: audit: type=1130 audit(1712946148.728:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.733302 systemd-journald[250]: Journal started
Apr 12 18:22:28.733336 systemd-journald[250]: Runtime Journal (/run/log/journal/4df489ad03284b04ac0ffa3aeac4f7a9) is 6.0M, max 48.7M, 42.6M free.
Apr 12 18:22:28.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.722828 systemd-modules-load[251]: Inserted module 'overlay'
Apr 12 18:22:28.734826 systemd[1]: Started systemd-journald.service.
Apr 12 18:22:28.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.737665 kernel: audit: type=1130 audit(1712946148.734:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.741640 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:22:28.745415 systemd-modules-load[251]: Inserted module 'br_netfilter'
Apr 12 18:22:28.746170 kernel: Bridge firewalling registered
Apr 12 18:22:28.747459 systemd[1]: Finished dracut-cmdline-ask.service.
Apr 12 18:22:28.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.753542 kernel: audit: type=1130 audit(1712946148.748:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.751287 systemd-resolved[252]: Positive Trust Anchors:
Apr 12 18:22:28.751301 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:22:28.751328 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:22:28.751404 systemd[1]: Starting dracut-cmdline.service...
Apr 12 18:22:28.765010 kernel: SCSI subsystem initialized
Apr 12 18:22:28.765027 kernel: audit: type=1130 audit(1712946148.761:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.765063 dracut-cmdline[268]: dracut-dracut-053
Apr 12 18:22:28.755446 systemd-resolved[252]: Defaulting to hostname 'linux'.
Apr 12 18:22:28.768839 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 12 18:22:28.768856 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:22:28.768865 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Apr 12 18:22:28.756279 systemd[1]: Started systemd-resolved.service.
Apr 12 18:22:28.770283 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:22:28.763032 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:22:28.775911 systemd-modules-load[251]: Inserted module 'dm_multipath'
Apr 12 18:22:28.777186 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:22:28.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.778502 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:22:28.782399 kernel: audit: type=1130 audit(1712946148.777:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.785526 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:22:28.788659 kernel: audit: type=1130 audit(1712946148.785:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.841649 kernel: Loading iSCSI transport class v2.0-870.
Apr 12 18:22:28.853660 kernel: iscsi: registered transport (tcp)
Apr 12 18:22:28.871676 kernel: iscsi: registered transport (qla4xxx)
Apr 12 18:22:28.871712 kernel: QLogic iSCSI HBA Driver
Apr 12 18:22:28.905263 systemd[1]: Finished dracut-cmdline.service.
Apr 12 18:22:28.909081 kernel: audit: type=1130 audit(1712946148.905:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:28.906685 systemd[1]: Starting dracut-pre-udev.service...
Apr 12 18:22:28.958979 kernel: raid6: neonx8 gen() 13543 MB/s
Apr 12 18:22:28.978399 kernel: raid6: neonx8 xor() 10354 MB/s
Apr 12 18:22:28.996977 kernel: raid6: neonx4 gen() 12351 MB/s
Apr 12 18:22:29.011375 kernel: raid6: neonx4 xor() 10296 MB/s
Apr 12 18:22:29.027662 kernel: raid6: neonx2 gen() 12260 MB/s
Apr 12 18:22:29.044659 kernel: raid6: neonx2 xor() 10270 MB/s
Apr 12 18:22:29.061666 kernel: raid6: neonx1 gen() 10428 MB/s
Apr 12 18:22:29.078657 kernel: raid6: neonx1 xor() 8696 MB/s
Apr 12 18:22:29.095689 kernel: raid6: int64x8 gen() 6149 MB/s
Apr 12 18:22:29.112659 kernel: raid6: int64x8 xor() 3539 MB/s
Apr 12 18:22:29.129667 kernel: raid6: int64x4 gen() 7209 MB/s
Apr 12 18:22:29.146661 kernel: raid6: int64x4 xor() 3857 MB/s
Apr 12 18:22:29.163664 kernel: raid6: int64x2 gen() 6153 MB/s
Apr 12 18:22:29.180670 kernel: raid6: int64x2 xor() 3314 MB/s
Apr 12 18:22:29.197668 kernel: raid6: int64x1 gen() 5046 MB/s
Apr 12 18:22:29.215034 kernel: raid6: int64x1 xor() 2646 MB/s
Apr 12 18:22:29.215068 kernel: raid6: using algorithm neonx8 gen() 13543 MB/s
Apr 12 18:22:29.215077 kernel: raid6: .... xor() 10354 MB/s, rmw enabled
Apr 12 18:22:29.215086 kernel: raid6: using neon recovery algorithm
Apr 12 18:22:29.225652 kernel: xor: measuring software checksum speed
Apr 12 18:22:29.225681 kernel: 8regs : 17300 MB/sec
Apr 12 18:22:29.226995 kernel: 32regs : 20765 MB/sec
Apr 12 18:22:29.227753 kernel: arm64_neon : 27892 MB/sec
Apr 12 18:22:29.227763 kernel: xor: using function: arm64_neon (27892 MB/sec)
Apr 12 18:22:29.283755 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Apr 12 18:22:29.293830 systemd[1]: Finished dracut-pre-udev.service.
Apr 12 18:22:29.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:29.296000 audit: BPF prog-id=7 op=LOAD
Apr 12 18:22:29.296000 audit: BPF prog-id=8 op=LOAD
Apr 12 18:22:29.297658 kernel: audit: type=1130 audit(1712946149.293:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:29.297675 kernel: audit: type=1334 audit(1712946149.296:10): prog-id=7 op=LOAD
Apr 12 18:22:29.298005 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:22:29.311606 systemd-udevd[451]: Using default interface naming scheme 'v252'.
Apr 12 18:22:29.314870 systemd[1]: Started systemd-udevd.service.
Apr 12 18:22:29.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:29.316582 systemd[1]: Starting dracut-pre-trigger.service...
Apr 12 18:22:29.327956 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Apr 12 18:22:29.355551 systemd[1]: Finished dracut-pre-trigger.service.
Apr 12 18:22:29.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:29.356985 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:22:29.391314 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:22:29.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:29.420047 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 12 18:22:29.422687 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:22:29.422712 kernel: GPT:9289727 != 19775487 Apr 12 18:22:29.422720 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:22:29.422729 kernel: GPT:9289727 != 19775487 Apr 12 18:22:29.423979 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:22:29.423993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:22:29.438652 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508) Apr 12 18:22:29.442553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:22:29.447778 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:22:29.450503 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:22:29.451407 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:22:29.455767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:22:29.457260 systemd[1]: Starting disk-uuid.service... Apr 12 18:22:29.463146 disk-uuid[522]: Primary Header is updated. Apr 12 18:22:29.463146 disk-uuid[522]: Secondary Entries is updated. Apr 12 18:22:29.463146 disk-uuid[522]: Secondary Header is updated. 
Apr 12 18:22:29.466652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:22:29.476652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:22:30.476625 disk-uuid[523]: The operation has completed successfully. Apr 12 18:22:30.477647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:22:30.499882 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:22:30.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.499981 systemd[1]: Finished disk-uuid.service. Apr 12 18:22:30.501540 systemd[1]: Starting verity-setup.service... Apr 12 18:22:30.516646 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 12 18:22:30.543580 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:22:30.545719 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:22:30.547621 systemd[1]: Finished verity-setup.service. Apr 12 18:22:30.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.598657 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:22:30.599685 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:22:30.600428 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:22:30.601244 systemd[1]: Starting ignition-setup.service... Apr 12 18:22:30.603288 systemd[1]: Starting parse-ip-for-networkd.service... 
Apr 12 18:22:30.609739 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:22:30.609772 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:22:30.609785 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:22:30.617735 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:22:30.624310 systemd[1]: Finished ignition-setup.service. Apr 12 18:22:30.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.625888 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:22:30.696246 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:22:30.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.697000 audit: BPF prog-id=9 op=LOAD Apr 12 18:22:30.698240 systemd[1]: Starting systemd-networkd.service... 
Apr 12 18:22:30.704893 ignition[608]: Ignition 2.14.0 Apr 12 18:22:30.705580 ignition[608]: Stage: fetch-offline Apr 12 18:22:30.706229 ignition[608]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:22:30.707036 ignition[608]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:22:30.708114 ignition[608]: parsed url from cmdline: "" Apr 12 18:22:30.708178 ignition[608]: no config URL provided Apr 12 18:22:30.708801 ignition[608]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:22:30.709675 ignition[608]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:22:30.710425 ignition[608]: op(1): [started] loading QEMU firmware config module Apr 12 18:22:30.711303 ignition[608]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 12 18:22:30.714731 ignition[608]: op(1): [finished] loading QEMU firmware config module Apr 12 18:22:30.720719 systemd-networkd[699]: lo: Link UP Apr 12 18:22:30.720732 systemd-networkd[699]: lo: Gained carrier Apr 12 18:22:30.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.721099 systemd-networkd[699]: Enumeration completed Apr 12 18:22:30.721183 systemd[1]: Started systemd-networkd.service. Apr 12 18:22:30.721276 systemd-networkd[699]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:22:30.722291 systemd-networkd[699]: eth0: Link UP Apr 12 18:22:30.722294 systemd-networkd[699]: eth0: Gained carrier Apr 12 18:22:30.722296 systemd[1]: Reached target network.target. Apr 12 18:22:30.723979 systemd[1]: Starting iscsiuio.service... Apr 12 18:22:30.732805 systemd[1]: Started iscsiuio.service. Apr 12 18:22:30.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:22:30.734196 systemd[1]: Starting iscsid.service... Apr 12 18:22:30.737371 iscsid[706]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:22:30.737371 iscsid[706]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 18:22:30.737371 iscsid[706]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:22:30.737371 iscsid[706]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:22:30.737371 iscsid[706]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:22:30.737371 iscsid[706]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:22:30.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.743771 systemd[1]: Started iscsid.service. Apr 12 18:22:30.745272 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:22:30.749309 systemd-networkd[699]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:22:30.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.755381 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:22:30.756282 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:22:30.757111 systemd[1]: Reached target remote-cryptsetup.target. 
Apr 12 18:22:30.757931 systemd[1]: Reached target remote-fs.target. Apr 12 18:22:30.759315 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:22:30.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.766738 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:22:30.806995 ignition[608]: parsing config with SHA512: 22697390cac0efcb59ffa0a54e2590301f085d20d638ffc3c90b54d34e3529bf798a2ae481fc393517f74c79f87fdb9d689a431bfd7a00e4b76f216b0cbdb65a Apr 12 18:22:30.856148 unknown[608]: fetched base config from "system" Apr 12 18:22:30.856163 unknown[608]: fetched user config from "qemu" Apr 12 18:22:30.856829 ignition[608]: fetch-offline: fetch-offline passed Apr 12 18:22:30.856942 systemd-resolved[252]: Detected conflict on linux IN A 10.0.0.54 Apr 12 18:22:30.856895 ignition[608]: Ignition finished successfully Apr 12 18:22:30.856951 systemd-resolved[252]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Apr 12 18:22:30.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.859868 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:22:30.860766 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 12 18:22:30.861502 systemd[1]: Starting ignition-kargs.service... 
Apr 12 18:22:30.870163 ignition[721]: Ignition 2.14.0 Apr 12 18:22:30.870171 ignition[721]: Stage: kargs Apr 12 18:22:30.870257 ignition[721]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:22:30.870266 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:22:30.871339 ignition[721]: kargs: kargs passed Apr 12 18:22:30.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.872334 systemd[1]: Finished ignition-kargs.service. Apr 12 18:22:30.871379 ignition[721]: Ignition finished successfully Apr 12 18:22:30.874384 systemd[1]: Starting ignition-disks.service... Apr 12 18:22:30.880887 ignition[727]: Ignition 2.14.0 Apr 12 18:22:30.880897 ignition[727]: Stage: disks Apr 12 18:22:30.881001 ignition[727]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:22:30.881010 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:22:30.883686 systemd[1]: Finished ignition-disks.service. Apr 12 18:22:30.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.882116 ignition[727]: disks: disks passed Apr 12 18:22:30.885257 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:22:30.882163 ignition[727]: Ignition finished successfully Apr 12 18:22:30.886429 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:22:30.887586 systemd[1]: Reached target local-fs.target. Apr 12 18:22:30.888826 systemd[1]: Reached target sysinit.target. Apr 12 18:22:30.889961 systemd[1]: Reached target basic.target. Apr 12 18:22:30.892008 systemd[1]: Starting systemd-fsck-root.service... 
Apr 12 18:22:30.902471 systemd-fsck[735]: ROOT: clean, 612/553520 files, 56018/553472 blocks Apr 12 18:22:30.905714 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:22:30.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.907472 systemd[1]: Mounting sysroot.mount... Apr 12 18:22:30.913643 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:22:30.913981 systemd[1]: Mounted sysroot.mount. Apr 12 18:22:30.914658 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:22:30.916711 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:22:30.917521 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:22:30.917559 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:22:30.917582 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:22:30.919191 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:22:30.920949 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:22:30.925165 initrd-setup-root[745]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:22:30.929916 initrd-setup-root[753]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:22:30.933738 initrd-setup-root[761]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:22:30.937792 initrd-setup-root[769]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:22:30.963935 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:22:30.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:22:30.965410 systemd[1]: Starting ignition-mount.service... Apr 12 18:22:30.966694 systemd[1]: Starting sysroot-boot.service... Apr 12 18:22:30.971651 bash[786]: umount: /sysroot/usr/share/oem: not mounted. Apr 12 18:22:30.979817 ignition[788]: INFO : Ignition 2.14.0 Apr 12 18:22:30.979817 ignition[788]: INFO : Stage: mount Apr 12 18:22:30.981174 ignition[788]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:22:30.981174 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:22:30.981174 ignition[788]: INFO : mount: mount passed Apr 12 18:22:30.981174 ignition[788]: INFO : Ignition finished successfully Apr 12 18:22:30.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:30.982223 systemd[1]: Finished ignition-mount.service. Apr 12 18:22:30.985184 systemd[1]: Finished sysroot-boot.service. Apr 12 18:22:30.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:31.554091 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:22:31.560708 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (796) Apr 12 18:22:31.560734 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:22:31.560745 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:22:31.561746 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:22:31.564432 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:22:31.565864 systemd[1]: Starting ignition-files.service... 
Apr 12 18:22:31.579529 ignition[816]: INFO : Ignition 2.14.0 Apr 12 18:22:31.579529 ignition[816]: INFO : Stage: files Apr 12 18:22:31.580854 ignition[816]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:22:31.580854 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:22:31.580854 ignition[816]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:22:31.585052 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:22:31.585052 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:22:31.588731 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:22:31.589835 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:22:31.589835 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:22:31.589438 unknown[816]: wrote ssh authorized keys file for user: core Apr 12 18:22:31.592925 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:22:31.592925 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 12 18:22:31.651761 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:22:31.708101 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:22:31.709847 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:22:31.709847 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Apr 12 18:22:31.819978 systemd-networkd[699]: eth0: Gained IPv6LL Apr 12 18:22:31.976694 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:22:32.171007 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Apr 12 18:22:32.173412 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:22:32.173412 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:22:32.173412 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Apr 12 18:22:32.347761 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:22:32.585147 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Apr 12 18:22:32.587532 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:22:32.589247 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:22:32.589247 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:22:32.589247 ignition[816]: INFO : 
files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:22:32.589247 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubectl: attempt #1 Apr 12 18:22:32.637883 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 18:22:33.064982 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: b303598f3a65bbc366a7bfb4632d3b5cdd2d41b8a7973de80a99f8b1bb058299b57dc39b17a53eb7a54f1a0479ae4e2093fec675f1baff4613e14e0ed9d65c21 Apr 12 18:22:33.064982 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:22:33.064982 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:22:33.069903 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubelet: attempt #1 Apr 12 18:22:33.087441 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 18:22:33.704450 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ded47d757fac0279b1b784756fb54b3a5cb0180ce45833838b00d6d7c87578a985e4627503dd7ff734e5f577cf4752ae7daaa2b68e5934fd4617ea15e995f91b Apr 12 18:22:33.707213 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:22:33.707213 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:22:33.707213 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubeadm: attempt #1 Apr 12 18:22:33.726826 ignition[816]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): GET result: OK Apr 12 18:22:33.997782 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 3e6beeb7794aa002604f0be43af0255e707846760508ebe98006ec72ae8d7a7cf2c14fd52bbcc5084f0e9366b992dc64341b1da646f1ce6e937fb762f880dc15 Apr 12 18:22:34.000166 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:22:34.000166 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:22:34.000166 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 12 18:22:34.218916 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 12 18:22:34.261245 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 
18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:22:34.262852 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:22:34.262852 ignition[816]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(14): op(15): [started] writing unit 
"prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Apr 12 18:22:34.290391 ignition[816]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:22:34.322391 kernel: kauditd_printk_skb: 22 callbacks suppressed Apr 12 18:22:34.322412 kernel: audit: type=1130 audit(1712946154.295:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 12 18:22:34.322423 kernel: audit: type=1130 audit(1712946154.304:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.322437 kernel: audit: type=1131 audit(1712946154.304:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.322447 kernel: audit: type=1130 audit(1712946154.310:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:22:34.322552 ignition[816]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:22:34.322552 ignition[816]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Apr 12 18:22:34.322552 ignition[816]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:22:34.322552 ignition[816]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:22:34.322552 ignition[816]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:22:34.322552 ignition[816]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:22:34.322552 ignition[816]: INFO : files: files passed Apr 12 18:22:34.322552 ignition[816]: INFO : Ignition finished successfully Apr 12 18:22:34.337380 kernel: audit: type=1130 audit(1712946154.328:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.337400 kernel: audit: type=1131 audit(1712946154.328:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.293718 systemd[1]: Finished ignition-files.service. 
Apr 12 18:22:34.296193 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 18:22:34.339433 initrd-setup-root-after-ignition[841]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Apr 12 18:22:34.297576 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:22:34.342355 initrd-setup-root-after-ignition[843]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 18:22:34.298202 systemd[1]: Starting ignition-quench.service... Apr 12 18:22:34.302779 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 12 18:22:34.302853 systemd[1]: Finished ignition-quench.service. Apr 12 18:22:34.304920 systemd[1]: Finished initrd-setup-root-after-ignition.service. Apr 12 18:22:34.311055 systemd[1]: Reached target ignition-complete.target. Apr 12 18:22:34.315319 systemd[1]: Starting initrd-parse-etc.service... Apr 12 18:22:34.327108 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 18:22:34.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.327185 systemd[1]: Finished initrd-parse-etc.service. Apr 12 18:22:34.328188 systemd[1]: Reached target initrd-fs.target. Apr 12 18:22:34.334250 systemd[1]: Reached target initrd.target. Apr 12 18:22:34.336162 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 18:22:34.355687 kernel: audit: type=1130 audit(1712946154.347:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.336827 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 18:22:34.346625 systemd[1]: Finished dracut-pre-pivot.service. 
Apr 12 18:22:34.348554 systemd[1]: Starting initrd-cleanup.service... Apr 12 18:22:34.356725 systemd[1]: Stopped target nss-lookup.target. Apr 12 18:22:34.357721 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 18:22:34.358900 systemd[1]: Stopped target timers.target. Apr 12 18:22:34.359964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 18:22:34.360072 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 18:22:34.361280 systemd[1]: Stopped target initrd.target. Apr 12 18:22:34.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.362422 systemd[1]: Stopped target basic.target. Apr 12 18:22:34.368547 kernel: audit: type=1131 audit(1712946154.361:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.363749 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:22:34.366957 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:22:34.368135 systemd[1]: Stopped target initrd-root-device.target. Apr 12 18:22:34.369330 systemd[1]: Stopped target remote-fs.target. Apr 12 18:22:34.370517 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 18:22:34.371791 systemd[1]: Stopped target sysinit.target. Apr 12 18:22:34.372986 systemd[1]: Stopped target local-fs.target. Apr 12 18:22:34.374121 systemd[1]: Stopped target local-fs-pre.target. Apr 12 18:22:34.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.375323 systemd[1]: Stopped target swap.target. 
Apr 12 18:22:34.381153 kernel: audit: type=1131 audit(1712946154.376:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.376356 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 18:22:34.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.376470 systemd[1]: Stopped dracut-pre-mount.service. Apr 12 18:22:34.385736 kernel: audit: type=1131 audit(1712946154.381:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.377617 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:22:34.380699 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:22:34.380812 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:22:34.381933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 18:22:34.382041 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:22:34.385355 systemd[1]: Stopped target paths.target. Apr 12 18:22:34.386354 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:22:34.390658 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:22:34.392169 systemd[1]: Stopped target slices.target. Apr 12 18:22:34.393327 systemd[1]: Stopped target sockets.target. Apr 12 18:22:34.394424 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 12 18:22:34.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.394540 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 18:22:34.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.395719 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:22:34.395814 systemd[1]: Stopped ignition-files.service. Apr 12 18:22:34.399893 iscsid[706]: iscsid shutting down. Apr 12 18:22:34.398005 systemd[1]: Stopping ignition-mount.service... Apr 12 18:22:34.399384 systemd[1]: Stopping iscsid.service... Apr 12 18:22:34.401120 systemd[1]: Stopping sysroot-boot.service... Apr 12 18:22:34.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.402089 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:22:34.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.402235 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:22:34.403426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:22:34.403526 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:22:34.406001 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 18:22:34.406156 systemd[1]: Stopped iscsid.service. 
Apr 12 18:22:34.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.407682 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:22:34.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.407749 systemd[1]: Closed iscsid.socket. Apr 12 18:22:34.408437 systemd[1]: Stopping iscsiuio.service... Apr 12 18:22:34.412591 ignition[856]: INFO : Ignition 2.14.0 Apr 12 18:22:34.412591 ignition[856]: INFO : Stage: umount Apr 12 18:22:34.412591 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:22:34.412591 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:22:34.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.409843 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 18:22:34.418707 ignition[856]: INFO : umount: umount passed Apr 12 18:22:34.418707 ignition[856]: INFO : Ignition finished successfully Apr 12 18:22:34.409931 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:22:34.413464 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 18:22:34.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.413547 systemd[1]: Stopped iscsiuio.service. 
Apr 12 18:22:34.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.414995 systemd[1]: Stopped target network.target. Apr 12 18:22:34.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.416350 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:22:34.416386 systemd[1]: Closed iscsiuio.socket. Apr 12 18:22:34.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.418086 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:22:34.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.419313 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:22:34.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.421093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:22:34.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:22:34.421500 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:22:34.421585 systemd[1]: Stopped ignition-mount.service. Apr 12 18:22:34.433000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:22:34.422585 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 12 18:22:34.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.422665 systemd[1]: Stopped sysroot-boot.service. Apr 12 18:22:34.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.423705 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:22:34.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.423745 systemd[1]: Stopped ignition-disks.service. Apr 12 18:22:34.424808 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:22:34.424847 systemd[1]: Stopped ignition-kargs.service. Apr 12 18:22:34.425994 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:22:34.426030 systemd[1]: Stopped ignition-setup.service. Apr 12 18:22:34.427119 systemd-networkd[699]: eth0: DHCPv6 lease lost Apr 12 18:22:34.443000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:22:34.427218 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:22:34.427254 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:22:34.428518 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 12 18:22:34.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.428606 systemd[1]: Stopped systemd-resolved.service. Apr 12 18:22:34.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.429989 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:22:34.430072 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:22:34.431065 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:22:34.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.431090 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:22:34.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.432832 systemd[1]: Stopping network-cleanup.service... Apr 12 18:22:34.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.433438 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 18:22:34.433493 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:22:34.434769 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:22:34.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:22:34.434809 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:22:34.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.436784 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 18:22:34.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.436823 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:22:34.440988 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:22:34.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:34.442745 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:22:34.445611 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:22:34.445979 systemd[1]: Stopped systemd-udevd.service. Apr 12 18:22:34.447012 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:22:34.447091 systemd[1]: Stopped network-cleanup.service. Apr 12 18:22:34.448183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:22:34.448214 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:22:34.449366 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:22:34.449395 systemd[1]: Closed systemd-udevd-kernel.socket. 
Apr 12 18:22:34.450661 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:22:34.450706 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:22:34.451888 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:22:34.451928 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:22:34.453202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:22:34.453241 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:22:34.455013 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Apr 12 18:22:34.456301 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 12 18:22:34.456367 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Apr 12 18:22:34.458304 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 18:22:34.458344 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:22:34.459075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:22:34.459113 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:22:34.461102 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 12 18:22:34.461487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:22:34.461566 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 18:22:34.462591 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:22:34.464360 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:22:34.470288 systemd[1]: Switching root. Apr 12 18:22:34.486895 systemd-journald[250]: Journal stopped Apr 12 18:22:36.540398 systemd-journald[250]: Received SIGTERM from PID 1 (n/a). Apr 12 18:22:36.540459 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:22:36.540476 kernel: SELinux: Class anon_inode not defined in policy. 
Apr 12 18:22:36.540486 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 18:22:36.540496 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 18:22:36.540505 kernel: SELinux: policy capability open_perms=1 Apr 12 18:22:36.540514 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 18:22:36.540524 kernel: SELinux: policy capability always_check_network=0 Apr 12 18:22:36.540533 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 18:22:36.540543 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 18:22:36.540552 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 18:22:36.540563 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 18:22:36.540577 systemd[1]: Successfully loaded SELinux policy in 32.619ms. Apr 12 18:22:36.540596 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.104ms. Apr 12 18:22:36.540608 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:22:36.540620 systemd[1]: Detected virtualization kvm. Apr 12 18:22:36.540654 systemd[1]: Detected architecture arm64. Apr 12 18:22:36.540665 systemd[1]: Detected first boot. Apr 12 18:22:36.540675 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:22:36.540687 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 18:22:36.540698 systemd[1]: Populated /etc with preset unit settings. Apr 12 18:22:36.540708 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Apr 12 18:22:36.540719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:22:36.540731 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:22:36.540742 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 12 18:22:36.540753 systemd[1]: Stopped initrd-switch-root.service. Apr 12 18:22:36.540764 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 18:22:36.540774 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:22:36.540788 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:22:36.540800 systemd[1]: Created slice system-getty.slice. Apr 12 18:22:36.540814 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:22:36.540824 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:22:36.540835 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:22:36.540846 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:22:36.540857 systemd[1]: Created slice user.slice. Apr 12 18:22:36.540868 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:22:36.540879 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:22:36.540889 systemd[1]: Set up automount boot.automount. Apr 12 18:22:36.540899 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:22:36.540909 systemd[1]: Stopped target initrd-switch-root.target. Apr 12 18:22:36.540920 systemd[1]: Stopped target initrd-fs.target. Apr 12 18:22:36.540930 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 18:22:36.540949 systemd[1]: Reached target integritysetup.target. Apr 12 18:22:36.540962 systemd[1]: Reached target remote-cryptsetup.target. 
Apr 12 18:22:36.540972 systemd[1]: Reached target remote-fs.target. Apr 12 18:22:36.540982 systemd[1]: Reached target slices.target. Apr 12 18:22:36.540992 systemd[1]: Reached target swap.target. Apr 12 18:22:36.541003 systemd[1]: Reached target torcx.target. Apr 12 18:22:36.541013 systemd[1]: Reached target veritysetup.target. Apr 12 18:22:36.541023 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:22:36.541034 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:22:36.541046 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:22:36.541056 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:22:36.541067 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:22:36.541077 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:22:36.541087 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:22:36.541098 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:22:36.541108 systemd[1]: Mounting media.mount... Apr 12 18:22:36.541118 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 18:22:36.541128 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:22:36.541138 systemd[1]: Mounting tmp.mount... Apr 12 18:22:36.541150 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 18:22:36.541160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:22:36.541171 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:22:36.541181 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:22:36.541192 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 18:22:36.541202 systemd[1]: Starting modprobe@drm.service... Apr 12 18:22:36.541213 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:22:36.541244 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:22:36.541256 systemd[1]: Starting modprobe@loop.service... 
Apr 12 18:22:36.541268 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 18:22:36.541280 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 18:22:36.541290 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 18:22:36.541300 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 18:22:36.541311 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 18:22:36.541326 systemd[1]: Stopped systemd-journald.service. Apr 12 18:22:36.541336 kernel: loop: module loaded Apr 12 18:22:36.541348 kernel: fuse: init (API version 7.34) Apr 12 18:22:36.541360 systemd[1]: Starting systemd-journald.service... Apr 12 18:22:36.541370 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:22:36.541380 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:22:36.541392 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:22:36.541402 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:22:36.541413 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 18:22:36.541423 systemd[1]: Stopped verity-setup.service. Apr 12 18:22:36.541433 systemd[1]: Mounted dev-hugepages.mount. Apr 12 18:22:36.541443 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:22:36.541454 systemd[1]: Mounted media.mount. Apr 12 18:22:36.541464 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:22:36.541474 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:22:36.541486 systemd[1]: Mounted tmp.mount. Apr 12 18:22:36.541496 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:22:36.541510 systemd-journald[963]: Journal started Apr 12 18:22:36.541552 systemd-journald[963]: Runtime Journal (/run/log/journal/4df489ad03284b04ac0ffa3aeac4f7a9) is 6.0M, max 48.7M, 42.6M free. 
Apr 12 18:22:34.551000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 18:22:36.543387 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 18:22:34.717000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:22:34.717000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:22:34.717000 audit: BPF prog-id=10 op=LOAD Apr 12 18:22:34.717000 audit: BPF prog-id=10 op=UNLOAD Apr 12 18:22:34.717000 audit: BPF prog-id=11 op=LOAD Apr 12 18:22:34.717000 audit: BPF prog-id=11 op=UNLOAD Apr 12 18:22:34.754000 audit[889]: AVC avc: denied { associate } for pid=889 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 18:22:34.754000 audit[889]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=872 pid=889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:22:34.754000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:22:34.756000 audit[889]: AVC avc: denied { associate } for pid=889 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 
tclass=filesystem permissive=1 Apr 12 18:22:34.756000 audit[889]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=872 pid=889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:22:34.756000 audit: CWD cwd="/" Apr 12 18:22:34.756000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:22:34.756000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:22:34.756000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:22:36.424000 audit: BPF prog-id=12 op=LOAD Apr 12 18:22:36.424000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:22:36.424000 audit: BPF prog-id=13 op=LOAD Apr 12 18:22:36.424000 audit: BPF prog-id=14 op=LOAD Apr 12 18:22:36.424000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:22:36.424000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:22:36.425000 audit: BPF prog-id=15 op=LOAD Apr 12 18:22:36.425000 audit: BPF prog-id=12 op=UNLOAD Apr 12 18:22:36.425000 audit: BPF prog-id=16 op=LOAD Apr 12 18:22:36.425000 audit: BPF prog-id=17 op=LOAD Apr 12 18:22:36.425000 audit: BPF prog-id=13 op=UNLOAD Apr 12 18:22:36.425000 audit: BPF prog-id=14 op=UNLOAD Apr 12 18:22:36.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.435000 audit: BPF prog-id=15 op=UNLOAD Apr 12 18:22:36.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:22:36.516000 audit: BPF prog-id=18 op=LOAD Apr 12 18:22:36.544166 systemd[1]: Finished modprobe@configfs.service. 
Apr 12 18:22:36.516000 audit: BPF prog-id=19 op=LOAD
Apr 12 18:22:36.516000 audit: BPF prog-id=20 op=LOAD
Apr 12 18:22:36.516000 audit: BPF prog-id=16 op=UNLOAD
Apr 12 18:22:36.516000 audit: BPF prog-id=17 op=UNLOAD
Apr 12 18:22:36.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.539000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:22:36.539000 audit[963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff980dcb0 a2=4000 a3=1 items=0 ppid=1 pid=963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:22:36.539000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:22:36.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:34.753279 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:22:36.423339 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:22:34.753540 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:22:36.423353 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Apr 12 18:22:34.753559 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:22:36.426436 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 12 18:22:34.753589 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Apr 12 18:22:36.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:34.753599 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="skipped missing lower profile" missing profile=oem
Apr 12 18:22:34.753646 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Apr 12 18:22:34.753660 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Apr 12 18:22:34.753856 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Apr 12 18:22:34.753892 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:22:34.753904 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:22:34.754757 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Apr 12 18:22:34.754795 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Apr 12 18:22:34.754814 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3
Apr 12 18:22:34.754828 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Apr 12 18:22:34.754847 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3
Apr 12 18:22:34.754860 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Apr 12 18:22:36.178750 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:22:36.179018 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:22:36.179116 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:22:36.179270 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:22:36.179318 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Apr 12 18:22:36.179374 /usr/lib/systemd/system-generators/torcx-generator[889]: time="2024-04-12T18:22:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Apr 12 18:22:36.546680 systemd[1]: Started systemd-journald.service.
Apr 12 18:22:36.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.546840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:22:36.547010 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:22:36.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.548089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:22:36.548240 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:22:36.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.549179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:22:36.549323 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:22:36.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.550294 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:22:36.550443 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:22:36.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.551411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:22:36.551576 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:22:36.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.552723 systemd[1]: Finished flatcar-tmpfiles.service.
Apr 12 18:22:36.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.553708 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:22:36.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.554895 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:22:36.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.556039 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:22:36.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.557351 systemd[1]: Reached target network-pre.target.
Apr 12 18:22:36.559204 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:22:36.560958 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:22:36.561580 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:22:36.563073 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:22:36.564896 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 18:22:36.565675 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:22:36.566769 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:22:36.567500 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:22:36.568690 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:22:36.571160 systemd[1]: Starting systemd-sysusers.service...
Apr 12 18:22:36.574540 systemd-journald[963]: Time spent on flushing to /var/log/journal/4df489ad03284b04ac0ffa3aeac4f7a9 is 14.257ms for 1034 entries.
Apr 12 18:22:36.574540 systemd-journald[963]: System Journal (/var/log/journal/4df489ad03284b04ac0ffa3aeac4f7a9) is 8.0M, max 195.6M, 187.6M free.
Apr 12 18:22:36.600038 systemd-journald[963]: Received client request to flush runtime journal.
Apr 12 18:22:36.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.575836 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 18:22:36.601075 udevadm[990]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 12 18:22:36.576811 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 18:22:36.580516 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:22:36.582326 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 18:22:36.584957 systemd[1]: Finished systemd-random-seed.service.
Apr 12 18:22:36.589923 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:22:36.590924 systemd[1]: Reached target first-boot-complete.target.
Apr 12 18:22:36.596384 systemd[1]: Finished systemd-sysusers.service.
Apr 12 18:22:36.598271 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:22:36.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.600872 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 18:22:36.615591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:22:36.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.941457 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 18:22:36.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.942000 audit: BPF prog-id=21 op=LOAD
Apr 12 18:22:36.942000 audit: BPF prog-id=22 op=LOAD
Apr 12 18:22:36.942000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 18:22:36.942000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 18:22:36.943524 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:22:36.963822 systemd-udevd[994]: Using default interface naming scheme 'v252'.
Apr 12 18:22:36.975915 systemd[1]: Started systemd-udevd.service.
Apr 12 18:22:36.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:36.976000 audit: BPF prog-id=23 op=LOAD
Apr 12 18:22:36.978352 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:22:36.985000 audit: BPF prog-id=24 op=LOAD
Apr 12 18:22:36.985000 audit: BPF prog-id=25 op=LOAD
Apr 12 18:22:36.985000 audit: BPF prog-id=26 op=LOAD
Apr 12 18:22:36.987319 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:22:37.009845 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Apr 12 18:22:37.023749 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:22:37.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.033348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:22:37.079443 systemd-networkd[1001]: lo: Link UP
Apr 12 18:22:37.079772 systemd-networkd[1001]: lo: Gained carrier
Apr 12 18:22:37.080151 systemd-networkd[1001]: Enumeration completed
Apr 12 18:22:37.080258 systemd[1]: Started systemd-networkd.service.
Apr 12 18:22:37.080259 systemd-networkd[1001]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:22:37.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.083141 systemd-networkd[1001]: eth0: Link UP
Apr 12 18:22:37.083149 systemd-networkd[1001]: eth0: Gained carrier
Apr 12 18:22:37.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.095071 systemd[1]: Finished systemd-udev-settle.service.
Apr 12 18:22:37.097114 systemd[1]: Starting lvm2-activation-early.service...
Apr 12 18:22:37.103831 systemd-networkd[1001]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 12 18:22:37.112068 lvm[1027]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:22:37.142500 systemd[1]: Finished lvm2-activation-early.service.
Apr 12 18:22:37.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.143415 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:22:37.145248 systemd[1]: Starting lvm2-activation.service...
Apr 12 18:22:37.148784 lvm[1028]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:22:37.173391 systemd[1]: Finished lvm2-activation.service.
Apr 12 18:22:37.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.174302 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:22:37.175039 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 12 18:22:37.175072 systemd[1]: Reached target local-fs.target.
Apr 12 18:22:37.175716 systemd[1]: Reached target machines.target.
Apr 12 18:22:37.177464 systemd[1]: Starting ldconfig.service...
Apr 12 18:22:37.178412 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Apr 12 18:22:37.178465 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:22:37.179685 systemd[1]: Starting systemd-boot-update.service...
Apr 12 18:22:37.181598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Apr 12 18:22:37.183842 systemd[1]: Starting systemd-machine-id-commit.service...
Apr 12 18:22:37.184838 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:22:37.184909 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:22:37.185954 systemd[1]: Starting systemd-tmpfiles-setup.service...
Apr 12 18:22:37.186996 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1030 (bootctl)
Apr 12 18:22:37.189240 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Apr 12 18:22:37.192692 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Apr 12 18:22:37.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.196348 systemd-tmpfiles[1033]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Apr 12 18:22:37.198643 systemd-tmpfiles[1033]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 12 18:22:37.204785 systemd-tmpfiles[1033]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 12 18:22:37.318099 systemd[1]: Finished systemd-machine-id-commit.service.
Apr 12 18:22:37.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.319223 systemd-fsck[1039]: fsck.fat 4.2 (2021-01-31)
Apr 12 18:22:37.319223 systemd-fsck[1039]: /dev/vda1: 236 files, 117047/258078 clusters
Apr 12 18:22:37.322111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Apr 12 18:22:37.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.403647 ldconfig[1029]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 12 18:22:37.406696 systemd[1]: Finished ldconfig.service.
Apr 12 18:22:37.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.533099 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 12 18:22:37.534445 systemd[1]: Mounting boot.mount...
Apr 12 18:22:37.540972 systemd[1]: Mounted boot.mount.
Apr 12 18:22:37.547506 systemd[1]: Finished systemd-boot-update.service.
Apr 12 18:22:37.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.598026 systemd[1]: Finished systemd-tmpfiles-setup.service.
Apr 12 18:22:37.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.599894 systemd[1]: Starting audit-rules.service...
Apr 12 18:22:37.601427 systemd[1]: Starting clean-ca-certificates.service...
Apr 12 18:22:37.603143 systemd[1]: Starting systemd-journal-catalog-update.service...
Apr 12 18:22:37.606000 audit: BPF prog-id=27 op=LOAD
Apr 12 18:22:37.607097 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:22:37.607000 audit: BPF prog-id=28 op=LOAD
Apr 12 18:22:37.609030 systemd[1]: Starting systemd-timesyncd.service...
Apr 12 18:22:37.610697 systemd[1]: Starting systemd-update-utmp.service...
Apr 12 18:22:37.612089 systemd[1]: Finished clean-ca-certificates.service.
Apr 12 18:22:37.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.613165 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 12 18:22:37.618000 audit[1054]: SYSTEM_BOOT pid=1054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.620797 systemd[1]: Finished systemd-journal-catalog-update.service.
Apr 12 18:22:37.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.623068 systemd[1]: Starting systemd-update-done.service...
Apr 12 18:22:37.625616 systemd[1]: Finished systemd-update-utmp.service.
Apr 12 18:22:37.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.629249 systemd[1]: Finished systemd-update-done.service.
Apr 12 18:22:37.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:22:37.640000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:22:37.640000 audit[1064]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffccf2bc00 a2=420 a3=0 items=0 ppid=1043 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:22:37.640000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:22:37.641857 augenrules[1064]: No rules
Apr 12 18:22:37.642609 systemd[1]: Finished audit-rules.service.
Apr 12 18:22:37.653198 systemd[1]: Started systemd-timesyncd.service.
Apr 12 18:22:37.654200 systemd-timesyncd[1050]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 12 18:22:37.654254 systemd-timesyncd[1050]: Initial clock synchronization to Fri 2024-04-12 18:22:37.416976 UTC.
Apr 12 18:22:37.654262 systemd[1]: Reached target time-set.target.
Apr 12 18:22:37.657579 systemd-resolved[1047]: Positive Trust Anchors:
Apr 12 18:22:37.657592 systemd-resolved[1047]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:22:37.657619 systemd-resolved[1047]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:22:37.668961 systemd-resolved[1047]: Defaulting to hostname 'linux'.
Apr 12 18:22:37.670261 systemd[1]: Started systemd-resolved.service.
Apr 12 18:22:37.671104 systemd[1]: Reached target network.target.
Apr 12 18:22:37.671735 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:22:37.672374 systemd[1]: Reached target sysinit.target.
Apr 12 18:22:37.673089 systemd[1]: Started motdgen.path.
Apr 12 18:22:37.673679 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Apr 12 18:22:37.674684 systemd[1]: Started logrotate.timer.
Apr 12 18:22:37.675354 systemd[1]: Started mdadm.timer.
Apr 12 18:22:37.675928 systemd[1]: Started systemd-tmpfiles-clean.timer.
Apr 12 18:22:37.676602 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 12 18:22:37.676648 systemd[1]: Reached target paths.target.
Apr 12 18:22:37.677236 systemd[1]: Reached target timers.target.
Apr 12 18:22:37.678155 systemd[1]: Listening on dbus.socket.
Apr 12 18:22:37.679641 systemd[1]: Starting docker.socket...
Apr 12 18:22:37.682419 systemd[1]: Listening on sshd.socket.
Apr 12 18:22:37.683148 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:22:37.683535 systemd[1]: Listening on docker.socket.
Apr 12 18:22:37.684278 systemd[1]: Reached target sockets.target.
Apr 12 18:22:37.684918 systemd[1]: Reached target basic.target.
Apr 12 18:22:37.685551 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:22:37.685582 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:22:37.686446 systemd[1]: Starting containerd.service...
Apr 12 18:22:37.687985 systemd[1]: Starting dbus.service...
Apr 12 18:22:37.689558 systemd[1]: Starting enable-oem-cloudinit.service...
Apr 12 18:22:37.691363 systemd[1]: Starting extend-filesystems.service...
Apr 12 18:22:37.692157 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Apr 12 18:22:37.693198 systemd[1]: Starting motdgen.service...
Apr 12 18:22:37.694831 systemd[1]: Starting prepare-cni-plugins.service...
Apr 12 18:22:37.696702 jq[1074]: false
Apr 12 18:22:37.698240 systemd[1]: Starting prepare-critools.service...
Apr 12 18:22:37.700872 systemd[1]: Starting prepare-helm.service...
Apr 12 18:22:37.702673 systemd[1]: Starting ssh-key-proc-cmdline.service...
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda1
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda2
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda3
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found usr
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda4
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda6
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda7
Apr 12 18:22:37.706134 extend-filesystems[1075]: Found vda9
Apr 12 18:22:37.706134 extend-filesystems[1075]: Checking size of /dev/vda9
Apr 12 18:22:37.706280 systemd[1]: Starting sshd-keygen.service...
Apr 12 18:22:37.709279 systemd[1]: Starting systemd-logind.service...
Apr 12 18:22:37.710331 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:22:37.710420 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 12 18:22:37.710843 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 12 18:22:37.732958 jq[1096]: true
Apr 12 18:22:37.711478 systemd[1]: Starting update-engine.service...
Apr 12 18:22:37.714303 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Apr 12 18:22:37.733396 tar[1099]: ./
Apr 12 18:22:37.733396 tar[1099]: ./loopback
Apr 12 18:22:37.716450 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 12 18:22:37.733742 tar[1100]: crictl
Apr 12 18:22:37.716590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Apr 12 18:22:37.716854 systemd[1]: motdgen.service: Deactivated successfully.
Apr 12 18:22:37.734199 jq[1105]: true
Apr 12 18:22:37.716985 systemd[1]: Finished motdgen.service.
Apr 12 18:22:37.720970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:22:37.721140 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:22:37.742502 tar[1101]: linux-arm64/helm Apr 12 18:22:37.737558 systemd[1]: Started dbus.service. Apr 12 18:22:37.737427 dbus-daemon[1073]: [system] SELinux support is enabled Apr 12 18:22:37.739820 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:22:37.739843 systemd[1]: Reached target system-config.target. Apr 12 18:22:37.740539 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:22:37.740556 systemd[1]: Reached target user-config.target. Apr 12 18:22:37.762903 extend-filesystems[1075]: Resized partition /dev/vda9 Apr 12 18:22:37.790945 extend-filesystems[1129]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:22:37.803274 tar[1099]: ./bandwidth Apr 12 18:22:37.810794 bash[1126]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:22:37.810852 systemd-logind[1091]: Watching system buttons on /dev/input/event0 (Power Button) Apr 12 18:22:37.811042 systemd-logind[1091]: New seat seat0. Apr 12 18:22:37.820202 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:22:37.821693 systemd[1]: Started systemd-logind.service. Apr 12 18:22:37.826655 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:22:37.852650 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:22:37.864865 update_engine[1094]: I0412 18:22:37.855168 1094 main.cc:92] Flatcar Update Engine starting Apr 12 18:22:37.864865 update_engine[1094]: I0412 18:22:37.861191 1094 update_check_scheduler.cc:74] Next update check in 4m59s Apr 12 18:22:37.861167 systemd[1]: Started update-engine.service. Apr 12 18:22:37.863889 systemd[1]: Started locksmithd.service. 
Apr 12 18:22:37.865363 extend-filesystems[1129]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 12 18:22:37.865363 extend-filesystems[1129]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:22:37.865363 extend-filesystems[1129]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 12 18:22:37.869642 extend-filesystems[1075]: Resized filesystem in /dev/vda9 Apr 12 18:22:37.866598 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:22:37.866783 systemd[1]: Finished extend-filesystems.service. Apr 12 18:22:37.885363 env[1106]: time="2024-04-12T18:22:37.885260840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:22:37.886340 tar[1099]: ./ptp Apr 12 18:22:37.916266 env[1106]: time="2024-04-12T18:22:37.916216440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:22:37.916395 env[1106]: time="2024-04-12T18:22:37.916372880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:22:37.918796 env[1106]: time="2024-04-12T18:22:37.918761040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:22:37.918796 env[1106]: time="2024-04-12T18:22:37.918792760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919042 env[1106]: time="2024-04-12T18:22:37.919014480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919042 env[1106]: time="2024-04-12T18:22:37.919038080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919104 env[1106]: time="2024-04-12T18:22:37.919051320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:22:37.919104 env[1106]: time="2024-04-12T18:22:37.919060800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919154 env[1106]: time="2024-04-12T18:22:37.919135680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919375 env[1106]: time="2024-04-12T18:22:37.919352760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919508 env[1106]: time="2024-04-12T18:22:37.919485520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:22:37.919508 env[1106]: time="2024-04-12T18:22:37.919504840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 12 18:22:37.919569 env[1106]: time="2024-04-12T18:22:37.919559280Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:22:37.919593 env[1106]: time="2024-04-12T18:22:37.919572160Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:22:37.922854 env[1106]: time="2024-04-12T18:22:37.922823480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:22:37.922854 env[1106]: time="2024-04-12T18:22:37.922855440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:22:37.922943 env[1106]: time="2024-04-12T18:22:37.922874680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:22:37.922943 env[1106]: time="2024-04-12T18:22:37.922906240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.922943 env[1106]: time="2024-04-12T18:22:37.922923680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923007 env[1106]: time="2024-04-12T18:22:37.922946280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923007 env[1106]: time="2024-04-12T18:22:37.922963000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923325 env[1106]: time="2024-04-12T18:22:37.923299240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923367 env[1106]: time="2024-04-12T18:22:37.923325840Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Apr 12 18:22:37.923367 env[1106]: time="2024-04-12T18:22:37.923340200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923367 env[1106]: time="2024-04-12T18:22:37.923352440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923367 env[1106]: time="2024-04-12T18:22:37.923364960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:22:37.923496 env[1106]: time="2024-04-12T18:22:37.923477120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:22:37.923571 env[1106]: time="2024-04-12T18:22:37.923555000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:22:37.923880 env[1106]: time="2024-04-12T18:22:37.923849160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:22:37.923921 env[1106]: time="2024-04-12T18:22:37.923893480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.923921 env[1106]: time="2024-04-12T18:22:37.923910080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:22:37.924038 env[1106]: time="2024-04-12T18:22:37.924020800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924071 env[1106]: time="2024-04-12T18:22:37.924039760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924071 env[1106]: time="2024-04-12T18:22:37.924052960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 12 18:22:37.924071 env[1106]: time="2024-04-12T18:22:37.924064760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924142 env[1106]: time="2024-04-12T18:22:37.924076400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924142 env[1106]: time="2024-04-12T18:22:37.924088360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924142 env[1106]: time="2024-04-12T18:22:37.924100480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924142 env[1106]: time="2024-04-12T18:22:37.924111880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924142 env[1106]: time="2024-04-12T18:22:37.924126520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:22:37.924291 env[1106]: time="2024-04-12T18:22:37.924268560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924319 env[1106]: time="2024-04-12T18:22:37.924293680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924319 env[1106]: time="2024-04-12T18:22:37.924307880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924357 env[1106]: time="2024-04-12T18:22:37.924320000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:22:37.924357 env[1106]: time="2024-04-12T18:22:37.924334000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:22:37.924357 env[1106]: time="2024-04-12T18:22:37.924346560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:22:37.924417 env[1106]: time="2024-04-12T18:22:37.924363000Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:22:37.924417 env[1106]: time="2024-04-12T18:22:37.924395280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 18:22:37.924638 env[1106]: time="2024-04-12T18:22:37.924577720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:22:37.928138 env[1106]: time="2024-04-12T18:22:37.924648880Z" level=info msg="Connect containerd service" Apr 12 18:22:37.928138 env[1106]: time="2024-04-12T18:22:37.924683200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:22:37.928138 env[1106]: time="2024-04-12T18:22:37.925359440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:22:37.928138 env[1106]: time="2024-04-12T18:22:37.925710560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:22:37.928138 env[1106]: time="2024-04-12T18:22:37.925753480Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:22:37.928138 env[1106]: time="2024-04-12T18:22:37.925796480Z" level=info msg="containerd successfully booted in 0.043323s" Apr 12 18:22:37.927256 systemd[1]: Started containerd.service. 
Apr 12 18:22:37.945905 env[1106]: time="2024-04-12T18:22:37.945845560Z" level=info msg="Start subscribing containerd event" Apr 12 18:22:37.945982 env[1106]: time="2024-04-12T18:22:37.945925360Z" level=info msg="Start recovering state" Apr 12 18:22:37.946019 env[1106]: time="2024-04-12T18:22:37.946005640Z" level=info msg="Start event monitor" Apr 12 18:22:37.946046 env[1106]: time="2024-04-12T18:22:37.946028200Z" level=info msg="Start snapshots syncer" Apr 12 18:22:37.946046 env[1106]: time="2024-04-12T18:22:37.946042000Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:22:37.946085 env[1106]: time="2024-04-12T18:22:37.946049640Z" level=info msg="Start streaming server" Apr 12 18:22:37.954868 tar[1099]: ./vlan Apr 12 18:22:38.016567 tar[1099]: ./host-device Apr 12 18:22:38.069808 tar[1099]: ./tuning Apr 12 18:22:38.107139 tar[1099]: ./vrf Apr 12 18:22:38.150329 tar[1099]: ./sbr Apr 12 18:22:38.190071 tar[1099]: ./tap Apr 12 18:22:38.206731 tar[1101]: linux-arm64/LICENSE Apr 12 18:22:38.206923 tar[1101]: linux-arm64/README.md Apr 12 18:22:38.211213 systemd[1]: Finished prepare-helm.service. Apr 12 18:22:38.214050 locksmithd[1132]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:22:38.231765 systemd[1]: Finished prepare-critools.service. Apr 12 18:22:38.234150 tar[1099]: ./dhcp Apr 12 18:22:38.315616 tar[1099]: ./static Apr 12 18:22:38.337585 tar[1099]: ./firewall Apr 12 18:22:38.368359 tar[1099]: ./macvlan Apr 12 18:22:38.396340 tar[1099]: ./dummy Apr 12 18:22:38.423954 tar[1099]: ./bridge Apr 12 18:22:38.453966 tar[1099]: ./ipvlan Apr 12 18:22:38.481561 tar[1099]: ./portmap Apr 12 18:22:38.507781 tar[1099]: ./host-local Apr 12 18:22:38.540044 systemd[1]: Finished prepare-cni-plugins.service. 
Apr 12 18:22:38.731800 systemd-networkd[1001]: eth0: Gained IPv6LL Apr 12 18:22:39.952836 sshd_keygen[1097]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:22:39.968928 systemd[1]: Finished sshd-keygen.service. Apr 12 18:22:39.970976 systemd[1]: Starting issuegen.service... Apr 12 18:22:39.975022 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:22:39.975154 systemd[1]: Finished issuegen.service. Apr 12 18:22:39.976929 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:22:39.982533 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:22:39.984392 systemd[1]: Started getty@tty1.service. Apr 12 18:22:39.986085 systemd[1]: Started serial-getty@ttyAMA0.service. Apr 12 18:22:39.987046 systemd[1]: Reached target getty.target. Apr 12 18:22:39.987734 systemd[1]: Reached target multi-user.target. Apr 12 18:22:39.989397 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:22:39.994978 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:22:39.995111 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:22:39.995994 systemd[1]: Startup finished in 558ms (kernel) + 5.942s (initrd) + 5.482s (userspace) = 11.983s. Apr 12 18:22:41.239125 systemd[1]: Created slice system-sshd.slice. Apr 12 18:22:41.240169 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:46928.service. Apr 12 18:22:41.289240 sshd[1163]: Accepted publickey for core from 10.0.0.1 port 46928 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:22:41.293263 sshd[1163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:22:41.305804 systemd[1]: Created slice user-500.slice. Apr 12 18:22:41.306759 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:22:41.310306 systemd-logind[1091]: New session 1 of user core. Apr 12 18:22:41.314878 systemd[1]: Finished user-runtime-dir@500.service. 
Apr 12 18:22:41.316013 systemd[1]: Starting user@500.service... Apr 12 18:22:41.319356 (systemd)[1166]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:22:41.377364 systemd[1166]: Queued start job for default target default.target. Apr 12 18:22:41.377778 systemd[1166]: Reached target paths.target. Apr 12 18:22:41.377797 systemd[1166]: Reached target sockets.target. Apr 12 18:22:41.377807 systemd[1166]: Reached target timers.target. Apr 12 18:22:41.377817 systemd[1166]: Reached target basic.target. Apr 12 18:22:41.377863 systemd[1166]: Reached target default.target. Apr 12 18:22:41.377899 systemd[1166]: Startup finished in 53ms. Apr 12 18:22:41.377938 systemd[1]: Started user@500.service. Apr 12 18:22:41.378802 systemd[1]: Started session-1.scope. Apr 12 18:22:41.427576 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:46942.service. Apr 12 18:22:41.473659 sshd[1175]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:22:41.475200 sshd[1175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:22:41.479803 systemd-logind[1091]: New session 2 of user core. Apr 12 18:22:41.480139 systemd[1]: Started session-2.scope. Apr 12 18:22:41.537164 sshd[1175]: pam_unix(sshd:session): session closed for user core Apr 12 18:22:41.540411 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:46950.service. Apr 12 18:22:41.540928 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:46942.service: Deactivated successfully. Apr 12 18:22:41.541673 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:22:41.542248 systemd-logind[1091]: Session 2 logged out. Waiting for processes to exit. Apr 12 18:22:41.543124 systemd-logind[1091]: Removed session 2. 
Apr 12 18:22:41.577036 sshd[1180]: Accepted publickey for core from 10.0.0.1 port 46950 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:22:41.578454 sshd[1180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:22:41.582485 systemd-logind[1091]: New session 3 of user core. Apr 12 18:22:41.583183 systemd[1]: Started session-3.scope. Apr 12 18:22:41.632135 sshd[1180]: pam_unix(sshd:session): session closed for user core Apr 12 18:22:41.635915 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:46950.service: Deactivated successfully. Apr 12 18:22:41.636500 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:22:41.637036 systemd-logind[1091]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:22:41.638024 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:46964.service. Apr 12 18:22:41.638740 systemd-logind[1091]: Removed session 3. Apr 12 18:22:41.674784 sshd[1187]: Accepted publickey for core from 10.0.0.1 port 46964 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:22:41.675818 sshd[1187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:22:41.679742 systemd-logind[1091]: New session 4 of user core. Apr 12 18:22:41.680833 systemd[1]: Started session-4.scope. Apr 12 18:22:41.736518 sshd[1187]: pam_unix(sshd:session): session closed for user core Apr 12 18:22:41.740836 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:46972.service. Apr 12 18:22:41.741318 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:46964.service: Deactivated successfully. Apr 12 18:22:41.741998 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:22:41.742624 systemd-logind[1091]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:22:41.744118 systemd-logind[1091]: Removed session 4. 
Apr 12 18:22:41.777974 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 46972 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:22:41.779578 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:22:41.784848 systemd-logind[1091]: New session 5 of user core. Apr 12 18:22:41.784869 systemd[1]: Started session-5.scope. Apr 12 18:22:41.846998 sudo[1196]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:22:41.847191 sudo[1196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:22:42.415164 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:22:42.420376 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:22:42.420660 systemd[1]: Reached target network-online.target. Apr 12 18:22:42.421906 systemd[1]: Starting docker.service... Apr 12 18:22:42.500580 env[1213]: time="2024-04-12T18:22:42.500523309Z" level=info msg="Starting up" Apr 12 18:22:42.501963 env[1213]: time="2024-04-12T18:22:42.501934670Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:22:42.502052 env[1213]: time="2024-04-12T18:22:42.502038789Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:22:42.502113 env[1213]: time="2024-04-12T18:22:42.502098336Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:22:42.502162 env[1213]: time="2024-04-12T18:22:42.502150690Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:22:42.504098 env[1213]: time="2024-04-12T18:22:42.504074077Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:22:42.504188 env[1213]: time="2024-04-12T18:22:42.504173793Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:22:42.504246 env[1213]: 
time="2024-04-12T18:22:42.504231493Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:22:42.504296 env[1213]: time="2024-04-12T18:22:42.504284751Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:22:42.729313 env[1213]: time="2024-04-12T18:22:42.729225511Z" level=info msg="Loading containers: start." Apr 12 18:22:42.842645 kernel: Initializing XFRM netlink socket Apr 12 18:22:42.865059 env[1213]: time="2024-04-12T18:22:42.865014089Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:22:42.916722 systemd-networkd[1001]: docker0: Link UP Apr 12 18:22:42.933780 env[1213]: time="2024-04-12T18:22:42.933743156Z" level=info msg="Loading containers: done." Apr 12 18:22:42.953964 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2269170216-merged.mount: Deactivated successfully. Apr 12 18:22:42.955759 env[1213]: time="2024-04-12T18:22:42.955727148Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:22:42.955996 env[1213]: time="2024-04-12T18:22:42.955978818Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:22:42.956145 env[1213]: time="2024-04-12T18:22:42.956126880Z" level=info msg="Daemon has completed initialization" Apr 12 18:22:42.968227 systemd[1]: Started docker.service. Apr 12 18:22:42.976383 env[1213]: time="2024-04-12T18:22:42.976267982Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:22:42.991911 systemd[1]: Reloading. 
Apr 12 18:22:43.036943 /usr/lib/systemd/system-generators/torcx-generator[1356]: time="2024-04-12T18:22:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:22:43.036975 /usr/lib/systemd/system-generators/torcx-generator[1356]: time="2024-04-12T18:22:43Z" level=info msg="torcx already run" Apr 12 18:22:43.092442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:22:43.092461 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:22:43.109896 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:22:43.166705 systemd[1]: Started kubelet.service. Apr 12 18:22:43.308110 kubelet[1392]: E0412 18:22:43.308004 1392 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:22:43.310534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:22:43.310681 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:22:43.439819 env[1106]: time="2024-04-12T18:22:43.439757896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\"" Apr 12 18:22:43.998852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127940955.mount: Deactivated successfully. Apr 12 18:22:46.090227 env[1106]: time="2024-04-12T18:22:46.090154442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:46.092132 env[1106]: time="2024-04-12T18:22:46.092087865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:46.093736 env[1106]: time="2024-04-12T18:22:46.093708904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:46.095294 env[1106]: time="2024-04-12T18:22:46.095260378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:46.096066 env[1106]: time="2024-04-12T18:22:46.096034650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\" returns image reference \"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794\"" Apr 12 18:22:46.105379 env[1106]: time="2024-04-12T18:22:46.105342544Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\"" Apr 12 18:22:49.571235 env[1106]: time="2024-04-12T18:22:49.571187842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Apr 12 18:22:49.572871 env[1106]: time="2024-04-12T18:22:49.572836832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:49.574515 env[1106]: time="2024-04-12T18:22:49.574479586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:49.576005 env[1106]: time="2024-04-12T18:22:49.575972687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:49.577487 env[1106]: time="2024-04-12T18:22:49.577449659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\" returns image reference \"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195\"" Apr 12 18:22:49.586363 env[1106]: time="2024-04-12T18:22:49.586324165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\"" Apr 12 18:22:51.075397 env[1106]: time="2024-04-12T18:22:51.075291940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:51.078226 env[1106]: time="2024-04-12T18:22:51.078188448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:51.079399 env[1106]: time="2024-04-12T18:22:51.079368371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:22:51.081932 env[1106]: time="2024-04-12T18:22:51.081899318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:51.082319 env[1106]: time="2024-04-12T18:22:51.082290942Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\" returns image reference \"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb\"" Apr 12 18:22:51.093657 env[1106]: time="2024-04-12T18:22:51.093617655Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\"" Apr 12 18:22:52.103608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688018079.mount: Deactivated successfully. Apr 12 18:22:52.598451 env[1106]: time="2024-04-12T18:22:52.598340490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:52.599849 env[1106]: time="2024-04-12T18:22:52.599811733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:52.604037 env[1106]: time="2024-04-12T18:22:52.603992453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:52.605575 env[1106]: time="2024-04-12T18:22:52.605545959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:52.605957 env[1106]: time="2024-04-12T18:22:52.605932385Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\" returns image reference \"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775\"" Apr 12 18:22:52.615843 env[1106]: time="2024-04-12T18:22:52.615802616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 12 18:22:53.200086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806482072.mount: Deactivated successfully. Apr 12 18:22:53.561388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:22:53.561560 systemd[1]: Stopped kubelet.service. Apr 12 18:22:53.563046 systemd[1]: Started kubelet.service. Apr 12 18:22:53.600923 kubelet[1444]: E0412 18:22:53.600869 1444 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:22:53.603942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:22:53.604072 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:22:54.147332 env[1106]: time="2024-04-12T18:22:54.147262418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.149339 env[1106]: time="2024-04-12T18:22:54.149299982Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.151608 env[1106]: time="2024-04-12T18:22:54.151580492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.153330 env[1106]: time="2024-04-12T18:22:54.153282195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.154230 env[1106]: time="2024-04-12T18:22:54.154197700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 12 18:22:54.163044 env[1106]: time="2024-04-12T18:22:54.163003252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:22:54.633892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571433054.mount: Deactivated successfully. 
Apr 12 18:22:54.637717 env[1106]: time="2024-04-12T18:22:54.637673435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.639040 env[1106]: time="2024-04-12T18:22:54.638999019Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.640370 env[1106]: time="2024-04-12T18:22:54.640333532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.641858 env[1106]: time="2024-04-12T18:22:54.641830155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:54.642312 env[1106]: time="2024-04-12T18:22:54.642281490Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 12 18:22:54.650933 env[1106]: time="2024-04-12T18:22:54.650897907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Apr 12 18:22:55.165841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160765868.mount: Deactivated successfully. 
Apr 12 18:22:58.525583 env[1106]: time="2024-04-12T18:22:58.525531895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:58.527652 env[1106]: time="2024-04-12T18:22:58.527614613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:58.529529 env[1106]: time="2024-04-12T18:22:58.529500097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:58.532057 env[1106]: time="2024-04-12T18:22:58.532028180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:58.532909 env[1106]: time="2024-04-12T18:22:58.532867095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Apr 12 18:23:01.955914 systemd[1]: Stopped kubelet.service. Apr 12 18:23:01.968369 systemd[1]: Reloading. 
Apr 12 18:23:02.017215 /usr/lib/systemd/system-generators/torcx-generator[1562]: time="2024-04-12T18:23:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:23:02.017573 /usr/lib/systemd/system-generators/torcx-generator[1562]: time="2024-04-12T18:23:02Z" level=info msg="torcx already run" Apr 12 18:23:02.071921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:23:02.072072 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:23:02.088911 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:23:02.151133 systemd[1]: Started kubelet.service. Apr 12 18:23:02.185099 kubelet[1600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:23:02.185099 kubelet[1600]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:23:02.185099 kubelet[1600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 18:23:02.185412 kubelet[1600]: I0412 18:23:02.185138 1600 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:23:02.855495 kubelet[1600]: I0412 18:23:02.855446 1600 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:23:02.855495 kubelet[1600]: I0412 18:23:02.855484 1600 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:23:02.855742 kubelet[1600]: I0412 18:23:02.855712 1600 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:23:02.861118 kubelet[1600]: E0412 18:23:02.861090 1600 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.861784 kubelet[1600]: I0412 18:23:02.861769 1600 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:23:02.868152 kubelet[1600]: I0412 18:23:02.868117 1600 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:23:02.868337 kubelet[1600]: I0412 18:23:02.868309 1600 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:23:02.868494 kubelet[1600]: I0412 18:23:02.868470 1600 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:23:02.868566 kubelet[1600]: I0412 18:23:02.868497 1600 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:23:02.868566 kubelet[1600]: I0412 18:23:02.868507 1600 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:23:02.868618 kubelet[1600]: I0412 
18:23:02.868603 1600 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:23:02.869039 kubelet[1600]: I0412 18:23:02.869012 1600 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:23:02.869039 kubelet[1600]: I0412 18:23:02.869036 1600 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:23:02.869107 kubelet[1600]: I0412 18:23:02.869056 1600 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:23:02.869107 kubelet[1600]: I0412 18:23:02.869072 1600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:23:02.869417 kubelet[1600]: W0412 18:23:02.869348 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.869417 kubelet[1600]: E0412 18:23:02.869416 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.869857 kubelet[1600]: I0412 18:23:02.869842 1600 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:23:02.869921 kubelet[1600]: W0412 18:23:02.869859 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.869921 kubelet[1600]: E0412 18:23:02.869896 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.870271 kubelet[1600]: I0412 18:23:02.870257 1600 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:23:02.870375 kubelet[1600]: W0412 18:23:02.870364 1600 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:23:02.871153 kubelet[1600]: I0412 18:23:02.871121 1600 server.go:1256] "Started kubelet" Apr 12 18:23:02.871197 kubelet[1600]: I0412 18:23:02.871172 1600 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:23:02.871907 kubelet[1600]: I0412 18:23:02.871883 1600 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:23:02.872802 kubelet[1600]: I0412 18:23:02.872772 1600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:23:02.873316 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Apr 12 18:23:02.873400 kubelet[1600]: I0412 18:23:02.872975 1600 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:23:02.873400 kubelet[1600]: I0412 18:23:02.873059 1600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:23:02.873400 kubelet[1600]: E0412 18:23:02.873157 1600 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17c59b7a4d64a993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:23:02.871099795 +0000 UTC m=+0.716840630,LastTimestamp:2024-04-12 18:23:02.871099795 +0000 UTC m=+0.716840630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:23:02.874253 kubelet[1600]: E0412 18:23:02.874152 1600 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:23:02.874253 kubelet[1600]: I0412 18:23:02.874184 1600 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:23:02.874357 kubelet[1600]: I0412 18:23:02.874257 1600 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:23:02.874357 kubelet[1600]: I0412 18:23:02.874302 1600 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:23:02.874587 kubelet[1600]: W0412 18:23:02.874549 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.874653 kubelet[1600]: E0412 18:23:02.874592 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.874804 kubelet[1600]: E0412 18:23:02.874784 1600 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:23:02.875078 kubelet[1600]: E0412 18:23:02.875030 1600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Apr 12 18:23:02.875437 kubelet[1600]: I0412 18:23:02.875410 1600 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:23:02.875510 kubelet[1600]: I0412 18:23:02.875487 1600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:23:02.876365 kubelet[1600]: I0412 18:23:02.876343 1600 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:23:02.889796 kubelet[1600]: I0412 18:23:02.889765 1600 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Apr 12 18:23:02.890681 kubelet[1600]: I0412 18:23:02.890660 1600 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:23:02.890681 kubelet[1600]: I0412 18:23:02.890678 1600 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:23:02.890793 kubelet[1600]: I0412 18:23:02.890692 1600 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:23:02.890908 kubelet[1600]: I0412 18:23:02.890886 1600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 18:23:02.891129 kubelet[1600]: I0412 18:23:02.891110 1600 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:23:02.891226 kubelet[1600]: I0412 18:23:02.891214 1600 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:23:02.891329 kubelet[1600]: E0412 18:23:02.891318 1600 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:23:02.892053 kubelet[1600]: W0412 18:23:02.892008 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.892268 kubelet[1600]: E0412 18:23:02.892252 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:02.892762 kubelet[1600]: I0412 18:23:02.892581 1600 policy_none.go:49] "None policy: Start" Apr 12 18:23:02.893350 kubelet[1600]: I0412 18:23:02.893331 1600 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:23:02.893417 kubelet[1600]: I0412 18:23:02.893370 1600 state_mem.go:35] "Initializing new 
in-memory state store" Apr 12 18:23:02.898277 systemd[1]: Created slice kubepods.slice. Apr 12 18:23:02.902005 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:23:02.904383 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 18:23:02.914306 kubelet[1600]: I0412 18:23:02.914261 1600 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:23:02.914491 kubelet[1600]: I0412 18:23:02.914467 1600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:23:02.915820 kubelet[1600]: E0412 18:23:02.915754 1600 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 12 18:23:02.975779 kubelet[1600]: I0412 18:23:02.975758 1600 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:23:02.976176 kubelet[1600]: E0412 18:23:02.976156 1600 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Apr 12 18:23:02.992303 kubelet[1600]: I0412 18:23:02.992280 1600 topology_manager.go:215] "Topology Admit Handler" podUID="6343dd632a9027f2776d2ccca1450238" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 12 18:23:02.993178 kubelet[1600]: I0412 18:23:02.993155 1600 topology_manager.go:215] "Topology Admit Handler" podUID="f4e8212a5db7e0401319814fa9ad65c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 12 18:23:02.993990 kubelet[1600]: I0412 18:23:02.993962 1600 topology_manager.go:215] "Topology Admit Handler" podUID="5d5c5aff921df216fcba2c51c322ceb1" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 12 18:23:02.998456 systemd[1]: Created slice kubepods-burstable-pod6343dd632a9027f2776d2ccca1450238.slice. 
Apr 12 18:23:03.013321 systemd[1]: Created slice kubepods-burstable-podf4e8212a5db7e0401319814fa9ad65c9.slice. Apr 12 18:23:03.016586 systemd[1]: Created slice kubepods-burstable-pod5d5c5aff921df216fcba2c51c322ceb1.slice. Apr 12 18:23:03.075492 kubelet[1600]: E0412 18:23:03.075448 1600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Apr 12 18:23:03.175971 kubelet[1600]: I0412 18:23:03.175278 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6343dd632a9027f2776d2ccca1450238-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6343dd632a9027f2776d2ccca1450238\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:03.175971 kubelet[1600]: I0412 18:23:03.175346 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:03.175971 kubelet[1600]: I0412 18:23:03.175381 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:03.175971 kubelet[1600]: I0412 18:23:03.175418 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:03.175971 kubelet[1600]: I0412 18:23:03.175455 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:03.176884 kubelet[1600]: I0412 18:23:03.175486 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6343dd632a9027f2776d2ccca1450238-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6343dd632a9027f2776d2ccca1450238\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:03.176884 kubelet[1600]: I0412 18:23:03.175518 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6343dd632a9027f2776d2ccca1450238-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6343dd632a9027f2776d2ccca1450238\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:03.176884 kubelet[1600]: I0412 18:23:03.175550 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:03.176884 kubelet[1600]: I0412 18:23:03.175581 1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5d5c5aff921df216fcba2c51c322ceb1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5d5c5aff921df216fcba2c51c322ceb1\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:23:03.177692 kubelet[1600]: I0412 18:23:03.177667 1600 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:23:03.178006 kubelet[1600]: E0412 18:23:03.177978 1600 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Apr 12 18:23:03.312408 kubelet[1600]: E0412 18:23:03.312383 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:03.313359 env[1106]: time="2024-04-12T18:23:03.313306573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6343dd632a9027f2776d2ccca1450238,Namespace:kube-system,Attempt:0,}" Apr 12 18:23:03.315456 kubelet[1600]: E0412 18:23:03.315437 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:03.315851 env[1106]: time="2024-04-12T18:23:03.315799187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f4e8212a5db7e0401319814fa9ad65c9,Namespace:kube-system,Attempt:0,}" Apr 12 18:23:03.318364 kubelet[1600]: E0412 18:23:03.318338 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:03.318812 env[1106]: time="2024-04-12T18:23:03.318754713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5d5c5aff921df216fcba2c51c322ceb1,Namespace:kube-system,Attempt:0,}" Apr 
12 18:23:03.476380 kubelet[1600]: E0412 18:23:03.476294 1600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Apr 12 18:23:03.579925 kubelet[1600]: I0412 18:23:03.579902 1600 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:23:03.580304 kubelet[1600]: E0412 18:23:03.580282 1600 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Apr 12 18:23:03.748343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146836514.mount: Deactivated successfully. Apr 12 18:23:03.754886 env[1106]: time="2024-04-12T18:23:03.754832505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.756103 env[1106]: time="2024-04-12T18:23:03.756071680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.756771 env[1106]: time="2024-04-12T18:23:03.756746369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.758820 env[1106]: time="2024-04-12T18:23:03.758791495Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.760372 env[1106]: time="2024-04-12T18:23:03.760340423Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.761510 env[1106]: time="2024-04-12T18:23:03.761480822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.762432 env[1106]: time="2024-04-12T18:23:03.762399334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.764680 env[1106]: time="2024-04-12T18:23:03.764619915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.768434 env[1106]: time="2024-04-12T18:23:03.768402970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.769945 env[1106]: time="2024-04-12T18:23:03.769914058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.770673 env[1106]: time="2024-04-12T18:23:03.770646327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.771571 env[1106]: time="2024-04-12T18:23:03.771540665Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:03.780348 kubelet[1600]: W0412 18:23:03.780289 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:03.780447 kubelet[1600]: E0412 18:23:03.780357 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:03.798417 env[1106]: time="2024-04-12T18:23:03.798335479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:23:03.798417 env[1106]: time="2024-04-12T18:23:03.798374558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:23:03.798417 env[1106]: time="2024-04-12T18:23:03.798394896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:23:03.798643 env[1106]: time="2024-04-12T18:23:03.798582978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fa2e51d00f1ab1f36979523766ca03e648182cccfed2c0615d37c719a181a3d pid=1651 runtime=io.containerd.runc.v2 Apr 12 18:23:03.800210 env[1106]: time="2024-04-12T18:23:03.800148050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:23:03.800210 env[1106]: time="2024-04-12T18:23:03.800184611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:23:03.800210 env[1106]: time="2024-04-12T18:23:03.800195000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:23:03.800379 env[1106]: time="2024-04-12T18:23:03.800337930Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/188bbfab2dd992609e17c471d8db2124f0870b20f4f4de5dde65966b93f2b243 pid=1652 runtime=io.containerd.runc.v2 Apr 12 18:23:03.800693 env[1106]: time="2024-04-12T18:23:03.800642808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:23:03.800790 env[1106]: time="2024-04-12T18:23:03.800677452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:23:03.800879 env[1106]: time="2024-04-12T18:23:03.800781942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:23:03.801079 env[1106]: time="2024-04-12T18:23:03.801043866Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcf37c58f9417e16b38eb40c740370ca8e70ce86efd16219ceeaa58d5774408e pid=1663 runtime=io.containerd.runc.v2 Apr 12 18:23:03.813497 systemd[1]: Started cri-containerd-188bbfab2dd992609e17c471d8db2124f0870b20f4f4de5dde65966b93f2b243.scope. Apr 12 18:23:03.816968 systemd[1]: Started cri-containerd-2fa2e51d00f1ab1f36979523766ca03e648182cccfed2c0615d37c719a181a3d.scope. 
Apr 12 18:23:03.819742 systemd[1]: Started cri-containerd-dcf37c58f9417e16b38eb40c740370ca8e70ce86efd16219ceeaa58d5774408e.scope. Apr 12 18:23:03.835061 kubelet[1600]: E0412 18:23:03.835018 1600 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17c59b7a4d64a993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:23:02.871099795 +0000 UTC m=+0.716840630,LastTimestamp:2024-04-12 18:23:02.871099795 +0000 UTC m=+0.716840630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:23:03.881460 env[1106]: time="2024-04-12T18:23:03.881397421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5d5c5aff921df216fcba2c51c322ceb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"188bbfab2dd992609e17c471d8db2124f0870b20f4f4de5dde65966b93f2b243\"" Apr 12 18:23:03.884988 kubelet[1600]: E0412 18:23:03.884094 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:03.885082 env[1106]: time="2024-04-12T18:23:03.884513019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f4e8212a5db7e0401319814fa9ad65c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcf37c58f9417e16b38eb40c740370ca8e70ce86efd16219ceeaa58d5774408e\"" Apr 12 18:23:03.885653 kubelet[1600]: E0412 18:23:03.885336 1600 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:03.886462 env[1106]: time="2024-04-12T18:23:03.886409181Z" level=info msg="CreateContainer within sandbox \"188bbfab2dd992609e17c471d8db2124f0870b20f4f4de5dde65966b93f2b243\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:23:03.887120 env[1106]: time="2024-04-12T18:23:03.887090224Z" level=info msg="CreateContainer within sandbox \"dcf37c58f9417e16b38eb40c740370ca8e70ce86efd16219ceeaa58d5774408e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:23:03.892375 env[1106]: time="2024-04-12T18:23:03.892324870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6343dd632a9027f2776d2ccca1450238,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa2e51d00f1ab1f36979523766ca03e648182cccfed2c0615d37c719a181a3d\"" Apr 12 18:23:03.892961 kubelet[1600]: E0412 18:23:03.892942 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:03.894686 env[1106]: time="2024-04-12T18:23:03.894655215Z" level=info msg="CreateContainer within sandbox \"2fa2e51d00f1ab1f36979523766ca03e648182cccfed2c0615d37c719a181a3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:23:03.936407 env[1106]: time="2024-04-12T18:23:03.936348695Z" level=info msg="CreateContainer within sandbox \"188bbfab2dd992609e17c471d8db2124f0870b20f4f4de5dde65966b93f2b243\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a791df44c01bec73ffd88627b77377c620dddac3c6cc0bb58c1c4737c2ba1b39\"" Apr 12 18:23:03.937053 env[1106]: time="2024-04-12T18:23:03.937027819Z" level=info msg="StartContainer for \"a791df44c01bec73ffd88627b77377c620dddac3c6cc0bb58c1c4737c2ba1b39\"" Apr 12 18:23:03.939024 env[1106]: 
time="2024-04-12T18:23:03.938976247Z" level=info msg="CreateContainer within sandbox \"dcf37c58f9417e16b38eb40c740370ca8e70ce86efd16219ceeaa58d5774408e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f54850fe1ae8fd451c453353d1fcff99e0301382fc1844e0a901b789aa825823\"" Apr 12 18:23:03.939362 env[1106]: time="2024-04-12T18:23:03.939334669Z" level=info msg="StartContainer for \"f54850fe1ae8fd451c453353d1fcff99e0301382fc1844e0a901b789aa825823\"" Apr 12 18:23:03.939732 env[1106]: time="2024-04-12T18:23:03.939695529Z" level=info msg="CreateContainer within sandbox \"2fa2e51d00f1ab1f36979523766ca03e648182cccfed2c0615d37c719a181a3d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4942089f2ed30d2b22c595a22ca5c83884df15be46a3e6cd58b8a13d23ff0e19\"" Apr 12 18:23:03.940050 env[1106]: time="2024-04-12T18:23:03.940022904Z" level=info msg="StartContainer for \"4942089f2ed30d2b22c595a22ca5c83884df15be46a3e6cd58b8a13d23ff0e19\"" Apr 12 18:23:03.954385 systemd[1]: Started cri-containerd-a791df44c01bec73ffd88627b77377c620dddac3c6cc0bb58c1c4737c2ba1b39.scope. Apr 12 18:23:03.956391 systemd[1]: Started cri-containerd-4942089f2ed30d2b22c595a22ca5c83884df15be46a3e6cd58b8a13d23ff0e19.scope. 
Apr 12 18:23:03.972414 kubelet[1600]: W0412 18:23:03.972272 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:03.972414 kubelet[1600]: E0412 18:23:03.972340 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:03.973042 systemd[1]: Started cri-containerd-f54850fe1ae8fd451c453353d1fcff99e0301382fc1844e0a901b789aa825823.scope. Apr 12 18:23:04.018452 kubelet[1600]: W0412 18:23:04.017331 1600 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:04.018452 kubelet[1600]: E0412 18:23:04.017459 1600 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 12 18:23:04.040459 env[1106]: time="2024-04-12T18:23:04.038345501Z" level=info msg="StartContainer for \"4942089f2ed30d2b22c595a22ca5c83884df15be46a3e6cd58b8a13d23ff0e19\" returns successfully" Apr 12 18:23:04.040459 env[1106]: time="2024-04-12T18:23:04.039171100Z" level=info msg="StartContainer for \"f54850fe1ae8fd451c453353d1fcff99e0301382fc1844e0a901b789aa825823\" returns successfully" Apr 12 18:23:04.047102 env[1106]: time="2024-04-12T18:23:04.044762946Z" level=info msg="StartContainer for 
\"a791df44c01bec73ffd88627b77377c620dddac3c6cc0bb58c1c4737c2ba1b39\" returns successfully" Apr 12 18:23:04.381996 kubelet[1600]: I0412 18:23:04.381961 1600 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:23:04.902966 kubelet[1600]: E0412 18:23:04.902933 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:04.905762 kubelet[1600]: E0412 18:23:04.905736 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:04.907722 kubelet[1600]: E0412 18:23:04.907700 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:05.514426 kubelet[1600]: E0412 18:23:05.514388 1600 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 12 18:23:05.603728 kubelet[1600]: I0412 18:23:05.603690 1600 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 12 18:23:05.618953 kubelet[1600]: E0412 18:23:05.618910 1600 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:23:05.719583 kubelet[1600]: E0412 18:23:05.719544 1600 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:23:05.871151 kubelet[1600]: I0412 18:23:05.871122 1600 apiserver.go:52] "Watching apiserver" Apr 12 18:23:05.874682 kubelet[1600]: I0412 18:23:05.874658 1600 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:23:05.913227 kubelet[1600]: E0412 18:23:05.913197 1600 kubelet.go:1921] "Failed creating a mirror 
pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 12 18:23:05.913477 kubelet[1600]: E0412 18:23:05.913460 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:05.913753 kubelet[1600]: E0412 18:23:05.913733 1600 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:05.914106 kubelet[1600]: E0412 18:23:05.914088 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:05.914174 kubelet[1600]: E0412 18:23:05.914158 1600 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:05.914539 kubelet[1600]: E0412 18:23:05.914523 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:06.913519 kubelet[1600]: E0412 18:23:06.913469 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:06.913904 kubelet[1600]: E0412 18:23:06.913667 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:07.910469 kubelet[1600]: E0412 18:23:07.910431 1600 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:07.910743 kubelet[1600]: E0412 18:23:07.910724 1600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:08.316000 systemd[1]: Reloading. Apr 12 18:23:08.360090 /usr/lib/systemd/system-generators/torcx-generator[1894]: time="2024-04-12T18:23:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:23:08.360120 /usr/lib/systemd/system-generators/torcx-generator[1894]: time="2024-04-12T18:23:08Z" level=info msg="torcx already run" Apr 12 18:23:08.422382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:23:08.422400 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:23:08.439072 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:23:08.514307 systemd[1]: Stopping kubelet.service... Apr 12 18:23:08.516109 kubelet[1600]: I0412 18:23:08.515450 1600 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:23:08.533317 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:23:08.533500 systemd[1]: Stopped kubelet.service. Apr 12 18:23:08.535226 systemd[1]: Started kubelet.service. 
Apr 12 18:23:08.590879 kubelet[1932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:23:08.591225 kubelet[1932]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:23:08.591288 kubelet[1932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:23:08.591424 kubelet[1932]: I0412 18:23:08.591391 1932 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:23:08.596253 kubelet[1932]: I0412 18:23:08.596210 1932 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:23:08.596253 kubelet[1932]: I0412 18:23:08.596246 1932 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:23:08.596442 kubelet[1932]: I0412 18:23:08.596425 1932 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:23:08.598841 kubelet[1932]: I0412 18:23:08.598798 1932 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:23:08.601340 kubelet[1932]: I0412 18:23:08.601302 1932 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:23:08.606361 kubelet[1932]: I0412 18:23:08.606322 1932 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:23:08.606551 kubelet[1932]: I0412 18:23:08.606509 1932 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:23:08.606707 kubelet[1932]: I0412 18:23:08.606684 1932 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:23:08.606707 kubelet[1932]: I0412 18:23:08.606703 1932 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:23:08.606834 kubelet[1932]: I0412 18:23:08.606714 1932 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:23:08.606834 kubelet[1932]: I0412 
18:23:08.606741 1932 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:23:08.606884 kubelet[1932]: I0412 18:23:08.606838 1932 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:23:08.606884 kubelet[1932]: I0412 18:23:08.606857 1932 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:23:08.606884 kubelet[1932]: I0412 18:23:08.606876 1932 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:23:08.606957 kubelet[1932]: I0412 18:23:08.606889 1932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:23:08.607815 kubelet[1932]: I0412 18:23:08.607791 1932 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:23:08.608305 kubelet[1932]: I0412 18:23:08.608285 1932 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:23:08.608804 kubelet[1932]: I0412 18:23:08.608774 1932 server.go:1256] "Started kubelet" Apr 12 18:23:08.610401 kubelet[1932]: I0412 18:23:08.610376 1932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:23:08.611874 kubelet[1932]: I0412 18:23:08.611854 1932 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:23:08.612036 kubelet[1932]: I0412 18:23:08.611066 1932 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:23:08.621709 kubelet[1932]: I0412 18:23:08.621686 1932 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:23:08.622906 kubelet[1932]: I0412 18:23:08.622886 1932 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:23:08.623223 kubelet[1932]: I0412 18:23:08.623209 1932 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:23:08.623319 kubelet[1932]: I0412 18:23:08.611096 1932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:23:08.623590 
kubelet[1932]: I0412 18:23:08.623561 1932 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:23:08.632577 kubelet[1932]: I0412 18:23:08.632547 1932 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:23:08.632720 kubelet[1932]: I0412 18:23:08.632682 1932 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:23:08.637521 kubelet[1932]: I0412 18:23:08.637490 1932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:23:08.639041 kubelet[1932]: I0412 18:23:08.638597 1932 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:23:08.639801 kubelet[1932]: I0412 18:23:08.639769 1932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 18:23:08.639801 kubelet[1932]: I0412 18:23:08.639794 1932 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:23:08.640552 kubelet[1932]: I0412 18:23:08.639813 1932 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:23:08.640552 kubelet[1932]: E0412 18:23:08.639866 1932 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:23:08.641622 kubelet[1932]: E0412 18:23:08.641595 1932 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:23:08.691019 kubelet[1932]: I0412 18:23:08.690992 1932 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:23:08.691019 kubelet[1932]: I0412 18:23:08.691016 1932 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:23:08.691019 kubelet[1932]: I0412 18:23:08.691033 1932 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:23:08.691226 kubelet[1932]: I0412 18:23:08.691167 1932 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:23:08.691226 kubelet[1932]: I0412 18:23:08.691186 1932 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:23:08.691226 kubelet[1932]: I0412 18:23:08.691192 1932 policy_none.go:49] "None policy: Start" Apr 12 18:23:08.692130 kubelet[1932]: I0412 18:23:08.692099 1932 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:23:08.692219 kubelet[1932]: I0412 18:23:08.692151 1932 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:23:08.692362 kubelet[1932]: I0412 18:23:08.692347 1932 state_mem.go:75] "Updated machine memory state" Apr 12 18:23:08.696955 kubelet[1932]: I0412 18:23:08.696926 1932 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:23:08.697936 kubelet[1932]: I0412 18:23:08.697920 1932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:23:08.706563 sudo[1962]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:23:08.706796 sudo[1962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:23:08.722305 kubelet[1932]: I0412 18:23:08.722286 1932 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:23:08.730015 kubelet[1932]: I0412 18:23:08.729979 1932 kubelet_node_status.go:112] "Node was previously registered" 
node="localhost" Apr 12 18:23:08.730286 kubelet[1932]: I0412 18:23:08.730272 1932 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 12 18:23:08.739979 kubelet[1932]: I0412 18:23:08.739948 1932 topology_manager.go:215] "Topology Admit Handler" podUID="6343dd632a9027f2776d2ccca1450238" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 12 18:23:08.740072 kubelet[1932]: I0412 18:23:08.740043 1932 topology_manager.go:215] "Topology Admit Handler" podUID="f4e8212a5db7e0401319814fa9ad65c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 12 18:23:08.740131 kubelet[1932]: I0412 18:23:08.740110 1932 topology_manager.go:215] "Topology Admit Handler" podUID="5d5c5aff921df216fcba2c51c322ceb1" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 12 18:23:08.746581 kubelet[1932]: E0412 18:23:08.746520 1932 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:08.747393 kubelet[1932]: E0412 18:23:08.747373 1932 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 12 18:23:08.924918 kubelet[1932]: I0412 18:23:08.924825 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6343dd632a9027f2776d2ccca1450238-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6343dd632a9027f2776d2ccca1450238\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:08.926937 kubelet[1932]: I0412 18:23:08.925078 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:08.926937 kubelet[1932]: I0412 18:23:08.925154 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:08.926937 kubelet[1932]: I0412 18:23:08.925183 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:08.926937 kubelet[1932]: I0412 18:23:08.925210 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6343dd632a9027f2776d2ccca1450238-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6343dd632a9027f2776d2ccca1450238\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:08.926937 kubelet[1932]: I0412 18:23:08.925233 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6343dd632a9027f2776d2ccca1450238-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6343dd632a9027f2776d2ccca1450238\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:08.927182 kubelet[1932]: I0412 18:23:08.925257 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") 
" pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:08.927182 kubelet[1932]: I0412 18:23:08.925279 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:23:08.927182 kubelet[1932]: I0412 18:23:08.925301 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5c5aff921df216fcba2c51c322ceb1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5d5c5aff921df216fcba2c51c322ceb1\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:23:09.048648 kubelet[1932]: E0412 18:23:09.048606 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:09.049376 kubelet[1932]: E0412 18:23:09.049342 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:09.050897 kubelet[1932]: E0412 18:23:09.049494 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:09.195384 sudo[1962]: pam_unix(sudo:session): session closed for user root Apr 12 18:23:09.607230 kubelet[1932]: I0412 18:23:09.607152 1932 apiserver.go:52] "Watching apiserver" Apr 12 18:23:09.624130 kubelet[1932]: I0412 18:23:09.624100 1932 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:23:09.659062 kubelet[1932]: E0412 18:23:09.659021 1932 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:09.659654 kubelet[1932]: E0412 18:23:09.659607 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:09.671656 kubelet[1932]: E0412 18:23:09.669658 1932 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 12 18:23:09.671656 kubelet[1932]: E0412 18:23:09.670253 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:09.690697 kubelet[1932]: I0412 18:23:09.690665 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.690609226 podStartE2EDuration="3.690609226s" podCreationTimestamp="2024-04-12 18:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:09.688959006 +0000 UTC m=+1.149129221" watchObservedRunningTime="2024-04-12 18:23:09.690609226 +0000 UTC m=+1.150779441" Apr 12 18:23:09.712594 kubelet[1932]: I0412 18:23:09.712546 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.712504681 podStartE2EDuration="1.712504681s" podCreationTimestamp="2024-04-12 18:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:09.701823567 +0000 UTC m=+1.161993782" watchObservedRunningTime="2024-04-12 18:23:09.712504681 +0000 UTC m=+1.172674856" Apr 12 
18:23:09.724123 kubelet[1932]: I0412 18:23:09.724089 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.7240563030000002 podStartE2EDuration="3.724056303s" podCreationTimestamp="2024-04-12 18:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:09.713122509 +0000 UTC m=+1.173292724" watchObservedRunningTime="2024-04-12 18:23:09.724056303 +0000 UTC m=+1.184226518" Apr 12 18:23:10.660999 kubelet[1932]: E0412 18:23:10.660962 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:10.663326 sudo[1196]: pam_unix(sudo:session): session closed for user root Apr 12 18:23:10.664895 sshd[1192]: pam_unix(sshd:session): session closed for user core Apr 12 18:23:10.668282 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:46972.service: Deactivated successfully. Apr 12 18:23:10.668951 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:23:10.669088 systemd[1]: session-5.scope: Consumed 5.486s CPU time. Apr 12 18:23:10.669768 systemd-logind[1091]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:23:10.670478 systemd-logind[1091]: Removed session 5. 
Apr 12 18:23:11.662170 kubelet[1932]: E0412 18:23:11.662140 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:11.671530 kubelet[1932]: E0412 18:23:11.671424 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:15.466290 kubelet[1932]: E0412 18:23:15.466255 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:15.669377 kubelet[1932]: E0412 18:23:15.669012 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:16.239681 kubelet[1932]: E0412 18:23:16.239616 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:16.671132 kubelet[1932]: E0412 18:23:16.670712 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:17.672932 kubelet[1932]: E0412 18:23:17.672892 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:21.679384 kubelet[1932]: E0412 18:23:21.679353 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:21.710607 kubelet[1932]: I0412 18:23:21.710579 1932 kuberuntime_manager.go:1529] "Updating 
runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:23:21.711155 env[1106]: time="2024-04-12T18:23:21.711098079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:23:21.711448 kubelet[1932]: I0412 18:23:21.711281 1932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:23:22.498969 kubelet[1932]: I0412 18:23:22.498931 1932 topology_manager.go:215] "Topology Admit Handler" podUID="b53d3350-5cf2-48a9-8b2e-c12952fb4f64" podNamespace="kube-system" podName="kube-proxy-8755d" Apr 12 18:23:22.503711 systemd[1]: Created slice kubepods-besteffort-podb53d3350_5cf2_48a9_8b2e_c12952fb4f64.slice. Apr 12 18:23:22.509754 kubelet[1932]: I0412 18:23:22.509712 1932 topology_manager.go:215] "Topology Admit Handler" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" podNamespace="kube-system" podName="cilium-nnxhv" Apr 12 18:23:22.514503 systemd[1]: Created slice kubepods-burstable-pod2eaeea72_e073_4388_ad73_2cdbe45859b6.slice. 
Apr 12 18:23:22.520592 kubelet[1932]: I0412 18:23:22.520541 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-cgroup\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520592 kubelet[1932]: I0412 18:23:22.520590 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cni-path\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520758 kubelet[1932]: I0412 18:23:22.520611 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-hubble-tls\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520758 kubelet[1932]: I0412 18:23:22.520649 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-hostproc\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520758 kubelet[1932]: I0412 18:23:22.520671 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2eaeea72-e073-4388-ad73-2cdbe45859b6-clustermesh-secrets\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520758 kubelet[1932]: I0412 18:23:22.520691 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-k8ppt\" (UniqueName: \"kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-kube-api-access-k8ppt\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520758 kubelet[1932]: I0412 18:23:22.520711 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-etc-cni-netd\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520758 kubelet[1932]: I0412 18:23:22.520730 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b53d3350-5cf2-48a9-8b2e-c12952fb4f64-kube-proxy\") pod \"kube-proxy-8755d\" (UID: \"b53d3350-5cf2-48a9-8b2e-c12952fb4f64\") " pod="kube-system/kube-proxy-8755d" Apr 12 18:23:22.520893 kubelet[1932]: I0412 18:23:22.520748 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-run\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520893 kubelet[1932]: I0412 18:23:22.520767 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-lib-modules\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520893 kubelet[1932]: I0412 18:23:22.520787 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-xtables-lock\") pod 
\"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520893 kubelet[1932]: I0412 18:23:22.520807 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-kernel\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520893 kubelet[1932]: I0412 18:23:22.520825 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-net\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.520893 kubelet[1932]: I0412 18:23:22.520843 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-bpf-maps\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.521044 kubelet[1932]: I0412 18:23:22.520862 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b53d3350-5cf2-48a9-8b2e-c12952fb4f64-xtables-lock\") pod \"kube-proxy-8755d\" (UID: \"b53d3350-5cf2-48a9-8b2e-c12952fb4f64\") " pod="kube-system/kube-proxy-8755d" Apr 12 18:23:22.521044 kubelet[1932]: I0412 18:23:22.520880 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b53d3350-5cf2-48a9-8b2e-c12952fb4f64-lib-modules\") pod \"kube-proxy-8755d\" (UID: \"b53d3350-5cf2-48a9-8b2e-c12952fb4f64\") " pod="kube-system/kube-proxy-8755d" Apr 12 
18:23:22.521044 kubelet[1932]: I0412 18:23:22.520898 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-config-path\") pod \"cilium-nnxhv\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") " pod="kube-system/cilium-nnxhv" Apr 12 18:23:22.521044 kubelet[1932]: I0412 18:23:22.520920 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxqg\" (UniqueName: \"kubernetes.io/projected/b53d3350-5cf2-48a9-8b2e-c12952fb4f64-kube-api-access-njxqg\") pod \"kube-proxy-8755d\" (UID: \"b53d3350-5cf2-48a9-8b2e-c12952fb4f64\") " pod="kube-system/kube-proxy-8755d" Apr 12 18:23:22.812599 kubelet[1932]: E0412 18:23:22.812493 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:22.813551 env[1106]: time="2024-04-12T18:23:22.813485107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8755d,Uid:b53d3350-5cf2-48a9-8b2e-c12952fb4f64,Namespace:kube-system,Attempt:0,}" Apr 12 18:23:22.817143 kubelet[1932]: E0412 18:23:22.817012 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:22.817747 env[1106]: time="2024-04-12T18:23:22.817483239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnxhv,Uid:2eaeea72-e073-4388-ad73-2cdbe45859b6,Namespace:kube-system,Attempt:0,}" Apr 12 18:23:22.835023 env[1106]: time="2024-04-12T18:23:22.832615836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:23:22.835023 env[1106]: time="2024-04-12T18:23:22.832666197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:23:22.835023 env[1106]: time="2024-04-12T18:23:22.832676437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:23:22.835713 env[1106]: time="2024-04-12T18:23:22.835609956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85de25b87b9db60539a4ebf12afb6fa190f41045dc6a63d802a13f55a5cd85ae pid=2024 runtime=io.containerd.runc.v2 Apr 12 18:23:22.839682 env[1106]: time="2024-04-12T18:23:22.838321351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:23:22.839682 env[1106]: time="2024-04-12T18:23:22.838354151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:23:22.839682 env[1106]: time="2024-04-12T18:23:22.838364991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:23:22.839682 env[1106]: time="2024-04-12T18:23:22.838491033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d pid=2040 runtime=io.containerd.runc.v2 Apr 12 18:23:22.858666 systemd[1]: Started cri-containerd-85de25b87b9db60539a4ebf12afb6fa190f41045dc6a63d802a13f55a5cd85ae.scope. 
Apr 12 18:23:22.859099 kubelet[1932]: I0412 18:23:22.859067 1932 topology_manager.go:215] "Topology Admit Handler" podUID="b8102758-b16d-42c9-afd8-1750f32c78fa" podNamespace="kube-system" podName="cilium-operator-5cc964979-nss7f" Apr 12 18:23:22.869068 systemd[1]: Created slice kubepods-besteffort-podb8102758_b16d_42c9_afd8_1750f32c78fa.slice. Apr 12 18:23:22.876776 systemd[1]: Started cri-containerd-d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d.scope. Apr 12 18:23:22.910918 env[1106]: time="2024-04-12T18:23:22.910873138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8755d,Uid:b53d3350-5cf2-48a9-8b2e-c12952fb4f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"85de25b87b9db60539a4ebf12afb6fa190f41045dc6a63d802a13f55a5cd85ae\"" Apr 12 18:23:22.911596 kubelet[1932]: E0412 18:23:22.911578 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:22.912186 env[1106]: time="2024-04-12T18:23:22.912141635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnxhv,Uid:2eaeea72-e073-4388-ad73-2cdbe45859b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\"" Apr 12 18:23:22.912835 kubelet[1932]: E0412 18:23:22.912680 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:22.914045 env[1106]: time="2024-04-12T18:23:22.913988459Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:23:22.915270 env[1106]: time="2024-04-12T18:23:22.915182914Z" level=info msg="CreateContainer within sandbox \"85de25b87b9db60539a4ebf12afb6fa190f41045dc6a63d802a13f55a5cd85ae\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:23:22.923675 kubelet[1932]: I0412 18:23:22.923635 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldt7c\" (UniqueName: \"kubernetes.io/projected/b8102758-b16d-42c9-afd8-1750f32c78fa-kube-api-access-ldt7c\") pod \"cilium-operator-5cc964979-nss7f\" (UID: \"b8102758-b16d-42c9-afd8-1750f32c78fa\") " pod="kube-system/cilium-operator-5cc964979-nss7f" Apr 12 18:23:22.923675 kubelet[1932]: I0412 18:23:22.923681 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8102758-b16d-42c9-afd8-1750f32c78fa-cilium-config-path\") pod \"cilium-operator-5cc964979-nss7f\" (UID: \"b8102758-b16d-42c9-afd8-1750f32c78fa\") " pod="kube-system/cilium-operator-5cc964979-nss7f" Apr 12 18:23:22.930176 env[1106]: time="2024-04-12T18:23:22.930092749Z" level=info msg="CreateContainer within sandbox \"85de25b87b9db60539a4ebf12afb6fa190f41045dc6a63d802a13f55a5cd85ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d55dfc47272a59a4609c0c29d1d2e6849a162e1afd29bfe909c693f989aa7b6f\"" Apr 12 18:23:22.931413 env[1106]: time="2024-04-12T18:23:22.930932000Z" level=info msg="StartContainer for \"d55dfc47272a59a4609c0c29d1d2e6849a162e1afd29bfe909c693f989aa7b6f\"" Apr 12 18:23:22.945991 systemd[1]: Started cri-containerd-d55dfc47272a59a4609c0c29d1d2e6849a162e1afd29bfe909c693f989aa7b6f.scope. Apr 12 18:23:22.995447 env[1106]: time="2024-04-12T18:23:22.995401641Z" level=info msg="StartContainer for \"d55dfc47272a59a4609c0c29d1d2e6849a162e1afd29bfe909c693f989aa7b6f\" returns successfully" Apr 12 18:23:23.012784 update_engine[1094]: I0412 18:23:23.012739 1094 update_attempter.cc:509] Updating boot flags... 
Apr 12 18:23:23.171570 kubelet[1932]: E0412 18:23:23.171539 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:23.172083 env[1106]: time="2024-04-12T18:23:23.172045077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-nss7f,Uid:b8102758-b16d-42c9-afd8-1750f32c78fa,Namespace:kube-system,Attempt:0,}" Apr 12 18:23:23.189516 env[1106]: time="2024-04-12T18:23:23.189439573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:23:23.189516 env[1106]: time="2024-04-12T18:23:23.189480333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:23:23.189516 env[1106]: time="2024-04-12T18:23:23.189492494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:23:23.189778 env[1106]: time="2024-04-12T18:23:23.189619175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46 pid=2190 runtime=io.containerd.runc.v2 Apr 12 18:23:23.199449 systemd[1]: Started cri-containerd-6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46.scope. 
Apr 12 18:23:23.244783 env[1106]: time="2024-04-12T18:23:23.244742779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-nss7f,Uid:b8102758-b16d-42c9-afd8-1750f32c78fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46\"" Apr 12 18:23:23.246001 kubelet[1932]: E0412 18:23:23.245554 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:23.684273 kubelet[1932]: E0412 18:23:23.684246 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:26.373070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283262865.mount: Deactivated successfully. Apr 12 18:23:28.664111 env[1106]: time="2024-04-12T18:23:28.664038507Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:28.665432 env[1106]: time="2024-04-12T18:23:28.665403480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:28.667886 env[1106]: time="2024-04-12T18:23:28.667856224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:28.668594 env[1106]: time="2024-04-12T18:23:28.668567191Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 12 18:23:28.673607 env[1106]: time="2024-04-12T18:23:28.673578520Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:23:28.674812 env[1106]: time="2024-04-12T18:23:28.674781092Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:23:28.687361 env[1106]: time="2024-04-12T18:23:28.687316854Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7\"" Apr 12 18:23:28.687869 env[1106]: time="2024-04-12T18:23:28.687837339Z" level=info msg="StartContainer for \"a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7\"" Apr 12 18:23:28.705604 systemd[1]: Started cri-containerd-a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7.scope. Apr 12 18:23:28.781434 env[1106]: time="2024-04-12T18:23:28.781389812Z" level=info msg="StartContainer for \"a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7\" returns successfully" Apr 12 18:23:28.799610 systemd[1]: cri-containerd-a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7.scope: Deactivated successfully. 
Apr 12 18:23:28.896192 env[1106]: time="2024-04-12T18:23:28.896131051Z" level=info msg="shim disconnected" id=a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7 Apr 12 18:23:28.896478 env[1106]: time="2024-04-12T18:23:28.896456734Z" level=warning msg="cleaning up after shim disconnected" id=a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7 namespace=k8s.io Apr 12 18:23:28.896550 env[1106]: time="2024-04-12T18:23:28.896536375Z" level=info msg="cleaning up dead shim" Apr 12 18:23:28.905565 env[1106]: time="2024-04-12T18:23:28.905530743Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:23:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2364 runtime=io.containerd.runc.v2\n" Apr 12 18:23:29.682366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7-rootfs.mount: Deactivated successfully. Apr 12 18:23:29.697020 kubelet[1932]: E0412 18:23:29.696840 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:29.701980 env[1106]: time="2024-04-12T18:23:29.701845206Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:23:29.714121 env[1106]: time="2024-04-12T18:23:29.714080200Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b\"" Apr 12 18:23:29.716658 env[1106]: time="2024-04-12T18:23:29.714830527Z" level=info msg="StartContainer for \"a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b\"" Apr 12 18:23:29.726651 kubelet[1932]: I0412 
18:23:29.726466 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8755d" podStartSLOduration=7.726419315 podStartE2EDuration="7.726419315s" podCreationTimestamp="2024-04-12 18:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:23.695782417 +0000 UTC m=+15.155952632" watchObservedRunningTime="2024-04-12 18:23:29.726419315 +0000 UTC m=+21.186589530" Apr 12 18:23:29.755474 systemd[1]: Started cri-containerd-a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b.scope. Apr 12 18:23:29.798154 env[1106]: time="2024-04-12T18:23:29.798098344Z" level=info msg="StartContainer for \"a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b\" returns successfully" Apr 12 18:23:29.811300 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:23:29.811515 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:23:29.811698 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:23:29.813206 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:23:29.815645 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:23:29.816356 systemd[1]: cri-containerd-a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b.scope: Deactivated successfully. Apr 12 18:23:29.825264 systemd[1]: Finished systemd-sysctl.service. 
Apr 12 18:23:29.840916 env[1106]: time="2024-04-12T18:23:29.840859502Z" level=info msg="shim disconnected" id=a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b Apr 12 18:23:29.840916 env[1106]: time="2024-04-12T18:23:29.840910143Z" level=warning msg="cleaning up after shim disconnected" id=a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b namespace=k8s.io Apr 12 18:23:29.840916 env[1106]: time="2024-04-12T18:23:29.840919863Z" level=info msg="cleaning up dead shim" Apr 12 18:23:29.847489 env[1106]: time="2024-04-12T18:23:29.847443683Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:23:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2429 runtime=io.containerd.runc.v2\n" Apr 12 18:23:30.194655 env[1106]: time="2024-04-12T18:23:30.194591280Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:30.195872 env[1106]: time="2024-04-12T18:23:30.195824211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:30.198508 env[1106]: time="2024-04-12T18:23:30.198087271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:23:30.198878 env[1106]: time="2024-04-12T18:23:30.198779917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 12 18:23:30.201705 env[1106]: 
time="2024-04-12T18:23:30.201565982Z" level=info msg="CreateContainer within sandbox \"6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:23:30.210698 env[1106]: time="2024-04-12T18:23:30.210663383Z" level=info msg="CreateContainer within sandbox \"6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\"" Apr 12 18:23:30.211665 env[1106]: time="2024-04-12T18:23:30.211218068Z" level=info msg="StartContainer for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\"" Apr 12 18:23:30.225761 systemd[1]: Started cri-containerd-87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866.scope. Apr 12 18:23:30.273803 env[1106]: time="2024-04-12T18:23:30.273754305Z" level=info msg="StartContainer for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" returns successfully" Apr 12 18:23:30.683013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b-rootfs.mount: Deactivated successfully. 
Apr 12 18:23:30.700030 kubelet[1932]: E0412 18:23:30.700000 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:30.701640 kubelet[1932]: E0412 18:23:30.701608 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:23:30.703368 env[1106]: time="2024-04-12T18:23:30.703318174Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:23:30.717663 env[1106]: time="2024-04-12T18:23:30.717608581Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01\"" Apr 12 18:23:30.718221 env[1106]: time="2024-04-12T18:23:30.718194666Z" level=info msg="StartContainer for \"1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01\"" Apr 12 18:23:30.748560 kubelet[1932]: I0412 18:23:30.748520 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-nss7f" podStartSLOduration=1.7957656549999998 podStartE2EDuration="8.748476616s" podCreationTimestamp="2024-04-12 18:23:22 +0000 UTC" firstStartedPulling="2024-04-12 18:23:23.246237958 +0000 UTC m=+14.706408133" lastFinishedPulling="2024-04-12 18:23:30.198948919 +0000 UTC m=+21.659119094" observedRunningTime="2024-04-12 18:23:30.719959602 +0000 UTC m=+22.180129817" watchObservedRunningTime="2024-04-12 18:23:30.748476616 +0000 UTC m=+22.208646831" Apr 12 18:23:30.772899 systemd[1]: Started cri-containerd-1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01.scope. 
Apr 12 18:23:30.848807 env[1106]: time="2024-04-12T18:23:30.848754870Z" level=info msg="StartContainer for \"1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01\" returns successfully" Apr 12 18:23:30.862568 systemd[1]: cri-containerd-1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01.scope: Deactivated successfully. Apr 12 18:23:30.909559 env[1106]: time="2024-04-12T18:23:30.909511251Z" level=info msg="shim disconnected" id=1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01 Apr 12 18:23:30.909791 env[1106]: time="2024-04-12T18:23:30.909773574Z" level=warning msg="cleaning up after shim disconnected" id=1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01 namespace=k8s.io Apr 12 18:23:30.909854 env[1106]: time="2024-04-12T18:23:30.909840974Z" level=info msg="cleaning up dead shim" Apr 12 18:23:30.918000 env[1106]: time="2024-04-12T18:23:30.917959727Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:23:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2522 runtime=io.containerd.runc.v2\n" Apr 12 18:23:31.685562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01-rootfs.mount: Deactivated successfully. 
Apr 12 18:23:31.705016 kubelet[1932]: E0412 18:23:31.704981 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:31.705337 kubelet[1932]: E0412 18:23:31.705224 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:31.708200 env[1106]: time="2024-04-12T18:23:31.708145818Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:23:31.719596 env[1106]: time="2024-04-12T18:23:31.719551595Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f\""
Apr 12 18:23:31.720422 env[1106]: time="2024-04-12T18:23:31.720392202Z" level=info msg="StartContainer for \"560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f\""
Apr 12 18:23:31.739342 systemd[1]: Started cri-containerd-560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f.scope.
Apr 12 18:23:31.775050 systemd[1]: cri-containerd-560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f.scope: Deactivated successfully.
Apr 12 18:23:31.776394 env[1106]: time="2024-04-12T18:23:31.776338480Z" level=info msg="StartContainer for \"560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f\" returns successfully"
Apr 12 18:23:31.802523 env[1106]: time="2024-04-12T18:23:31.802474022Z" level=info msg="shim disconnected" id=560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f
Apr 12 18:23:31.802780 env[1106]: time="2024-04-12T18:23:31.802761625Z" level=warning msg="cleaning up after shim disconnected" id=560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f namespace=k8s.io
Apr 12 18:23:31.802857 env[1106]: time="2024-04-12T18:23:31.802844826Z" level=info msg="cleaning up dead shim"
Apr 12 18:23:31.809799 env[1106]: time="2024-04-12T18:23:31.809761325Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:23:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2576 runtime=io.containerd.runc.v2\n"
Apr 12 18:23:32.693116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f-rootfs.mount: Deactivated successfully.
Apr 12 18:23:32.708542 kubelet[1932]: E0412 18:23:32.708508 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:32.710776 env[1106]: time="2024-04-12T18:23:32.710730154Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:23:32.727974 env[1106]: time="2024-04-12T18:23:32.727907575Z" level=info msg="CreateContainer within sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\""
Apr 12 18:23:32.728545 env[1106]: time="2024-04-12T18:23:32.728516740Z" level=info msg="StartContainer for \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\""
Apr 12 18:23:32.743580 systemd[1]: Started cri-containerd-49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba.scope.
Apr 12 18:23:32.780672 env[1106]: time="2024-04-12T18:23:32.779678398Z" level=info msg="StartContainer for \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\" returns successfully"
Apr 12 18:23:32.960018 kubelet[1932]: I0412 18:23:32.959206 1932 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Apr 12 18:23:32.978054 kubelet[1932]: I0412 18:23:32.977398 1932 topology_manager.go:215] "Topology Admit Handler" podUID="5273652f-6bfe-4b8b-b027-c246c70b44f5" podNamespace="kube-system" podName="coredns-76f75df574-cwlcx"
Apr 12 18:23:32.982706 systemd[1]: Created slice kubepods-burstable-pod5273652f_6bfe_4b8b_b027_c246c70b44f5.slice.
Apr 12 18:23:32.985552 kubelet[1932]: I0412 18:23:32.985520 1932 topology_manager.go:215] "Topology Admit Handler" podUID="cc7db979-e513-4afe-b8d8-a73ff83037b8" podNamespace="kube-system" podName="coredns-76f75df574-zw72r"
Apr 12 18:23:32.990717 systemd[1]: Created slice kubepods-burstable-podcc7db979_e513_4afe_b8d8_a73ff83037b8.slice.
Apr 12 18:23:33.042661 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Apr 12 18:23:33.094987 kubelet[1932]: I0412 18:23:33.094952 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc7db979-e513-4afe-b8d8-a73ff83037b8-config-volume\") pod \"coredns-76f75df574-zw72r\" (UID: \"cc7db979-e513-4afe-b8d8-a73ff83037b8\") " pod="kube-system/coredns-76f75df574-zw72r"
Apr 12 18:23:33.095102 kubelet[1932]: I0412 18:23:33.095034 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctt8l\" (UniqueName: \"kubernetes.io/projected/cc7db979-e513-4afe-b8d8-a73ff83037b8-kube-api-access-ctt8l\") pod \"coredns-76f75df574-zw72r\" (UID: \"cc7db979-e513-4afe-b8d8-a73ff83037b8\") " pod="kube-system/coredns-76f75df574-zw72r"
Apr 12 18:23:33.095102 kubelet[1932]: I0412 18:23:33.095067 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbjz\" (UniqueName: \"kubernetes.io/projected/5273652f-6bfe-4b8b-b027-c246c70b44f5-kube-api-access-tqbjz\") pod \"coredns-76f75df574-cwlcx\" (UID: \"5273652f-6bfe-4b8b-b027-c246c70b44f5\") " pod="kube-system/coredns-76f75df574-cwlcx"
Apr 12 18:23:33.095156 kubelet[1932]: I0412 18:23:33.095132 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5273652f-6bfe-4b8b-b027-c246c70b44f5-config-volume\") pod \"coredns-76f75df574-cwlcx\" (UID: \"5273652f-6bfe-4b8b-b027-c246c70b44f5\") " pod="kube-system/coredns-76f75df574-cwlcx"
Apr 12 18:23:33.286255 kubelet[1932]: E0412 18:23:33.286159 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:33.287002 env[1106]: time="2024-04-12T18:23:33.286960126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cwlcx,Uid:5273652f-6bfe-4b8b-b027-c246c70b44f5,Namespace:kube-system,Attempt:0,}"
Apr 12 18:23:33.293473 kubelet[1932]: E0412 18:23:33.293433 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:33.294004 env[1106]: time="2024-04-12T18:23:33.293957101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zw72r,Uid:cc7db979-e513-4afe-b8d8-a73ff83037b8,Namespace:kube-system,Attempt:0,}"
Apr 12 18:23:33.308656 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Apr 12 18:23:33.716679 kubelet[1932]: E0412 18:23:33.716211 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:33.731302 kubelet[1932]: I0412 18:23:33.731254 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nnxhv" podStartSLOduration=5.973056797 podStartE2EDuration="11.731212366s" podCreationTimestamp="2024-04-12 18:23:22 +0000 UTC" firstStartedPulling="2024-04-12 18:23:22.913435131 +0000 UTC m=+14.373605346" lastFinishedPulling="2024-04-12 18:23:28.6715907 +0000 UTC m=+20.131760915" observedRunningTime="2024-04-12 18:23:33.730726042 +0000 UTC m=+25.190896257" watchObservedRunningTime="2024-04-12 18:23:33.731212366 +0000 UTC m=+25.191382581"
Apr 12 18:23:34.138679 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:48050.service.
Apr 12 18:23:34.182006 sshd[2757]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:34.183761 sshd[2757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:34.188535 systemd-logind[1091]: New session 6 of user core.
Apr 12 18:23:34.189171 systemd[1]: Started session-6.scope.
Apr 12 18:23:34.354182 sshd[2757]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:34.357756 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:48050.service: Deactivated successfully.
Apr 12 18:23:34.358484 systemd[1]: session-6.scope: Deactivated successfully.
Apr 12 18:23:34.360230 systemd-logind[1091]: Session 6 logged out. Waiting for processes to exit.
Apr 12 18:23:34.362829 systemd-logind[1091]: Removed session 6.
Apr 12 18:23:34.718258 kubelet[1932]: E0412 18:23:34.718222 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:34.965265 systemd-networkd[1001]: cilium_host: Link UP
Apr 12 18:23:34.965802 systemd-networkd[1001]: cilium_net: Link UP
Apr 12 18:23:34.966466 systemd-networkd[1001]: cilium_net: Gained carrier
Apr 12 18:23:34.967526 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Apr 12 18:23:34.970722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Apr 12 18:23:34.967147 systemd-networkd[1001]: cilium_host: Gained carrier
Apr 12 18:23:35.059949 systemd-networkd[1001]: cilium_vxlan: Link UP
Apr 12 18:23:35.059955 systemd-networkd[1001]: cilium_vxlan: Gained carrier
Apr 12 18:23:35.115871 systemd-networkd[1001]: cilium_net: Gained IPv6LL
Apr 12 18:23:35.341694 kernel: NET: Registered PF_ALG protocol family
Apr 12 18:23:35.363907 systemd-networkd[1001]: cilium_host: Gained IPv6LL
Apr 12 18:23:35.719257 kubelet[1932]: E0412 18:23:35.719153 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:35.920594 systemd-networkd[1001]: lxc_health: Link UP
Apr 12 18:23:35.929663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:23:35.929689 systemd-networkd[1001]: lxc_health: Gained carrier
Apr 12 18:23:36.371202 systemd-networkd[1001]: lxc8baf9188cd14: Link UP
Apr 12 18:23:36.378655 kernel: eth0: renamed from tmpfd96b
Apr 12 18:23:36.396824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Apr 12 18:23:36.396916 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8baf9188cd14: link becomes ready
Apr 12 18:23:36.396938 kernel: eth0: renamed from tmp8f51e
Apr 12 18:23:36.405828 systemd-networkd[1001]: lxc4538a9817c23: Link UP
Apr 12 18:23:36.406204 systemd-networkd[1001]: lxc8baf9188cd14: Gained carrier
Apr 12 18:23:36.408414 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Apr 12 18:23:36.408485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4538a9817c23: link becomes ready
Apr 12 18:23:36.408854 systemd-networkd[1001]: lxc4538a9817c23: Gained carrier
Apr 12 18:23:36.820461 kubelet[1932]: E0412 18:23:36.820297 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:36.844835 systemd-networkd[1001]: cilium_vxlan: Gained IPv6LL
Apr 12 18:23:37.722740 kubelet[1932]: E0412 18:23:37.722703 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:37.868813 systemd-networkd[1001]: lxc_health: Gained IPv6LL
Apr 12 18:23:38.059746 systemd-networkd[1001]: lxc8baf9188cd14: Gained IPv6LL
Apr 12 18:23:38.315761 systemd-networkd[1001]: lxc4538a9817c23: Gained IPv6LL
Apr 12 18:23:39.359017 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:53214.service.
Apr 12 18:23:39.397635 sshd[3148]: Accepted publickey for core from 10.0.0.1 port 53214 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:39.402652 sshd[3148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:39.405936 systemd-logind[1091]: New session 7 of user core.
Apr 12 18:23:39.406727 systemd[1]: Started session-7.scope.
Apr 12 18:23:39.525298 sshd[3148]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:39.527529 systemd[1]: session-7.scope: Deactivated successfully.
Apr 12 18:23:39.528046 systemd-logind[1091]: Session 7 logged out. Waiting for processes to exit.
Apr 12 18:23:39.528165 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:53214.service: Deactivated successfully.
Apr 12 18:23:39.529149 systemd-logind[1091]: Removed session 7.
Apr 12 18:23:39.922375 env[1106]: time="2024-04-12T18:23:39.922303333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:23:39.922375 env[1106]: time="2024-04-12T18:23:39.922347573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:23:39.922802 env[1106]: time="2024-04-12T18:23:39.922357933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:23:39.922802 env[1106]: time="2024-04-12T18:23:39.922553495Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f51e7b2b128ad3fb8688b9905d71f3d8a71874a7e8408fdd798b0207416aac5 pid=3173 runtime=io.containerd.runc.v2
Apr 12 18:23:39.929694 env[1106]: time="2024-04-12T18:23:39.929620539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:23:39.929787 env[1106]: time="2024-04-12T18:23:39.929699739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:23:39.929787 env[1106]: time="2024-04-12T18:23:39.929726659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:23:39.930070 env[1106]: time="2024-04-12T18:23:39.929947741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd96be1111f35f24cd48fe1798007766cae40ebe97aa7d5deba0c21dee393fb2 pid=3192 runtime=io.containerd.runc.v2
Apr 12 18:23:39.938778 systemd[1]: Started cri-containerd-8f51e7b2b128ad3fb8688b9905d71f3d8a71874a7e8408fdd798b0207416aac5.scope.
Apr 12 18:23:39.947892 systemd[1]: Started cri-containerd-fd96be1111f35f24cd48fe1798007766cae40ebe97aa7d5deba0c21dee393fb2.scope.
Apr 12 18:23:39.984944 systemd-resolved[1047]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 12 18:23:39.986314 systemd-resolved[1047]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 12 18:23:40.004122 env[1106]: time="2024-04-12T18:23:40.004080001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cwlcx,Uid:5273652f-6bfe-4b8b-b027-c246c70b44f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd96be1111f35f24cd48fe1798007766cae40ebe97aa7d5deba0c21dee393fb2\""
Apr 12 18:23:40.004278 env[1106]: time="2024-04-12T18:23:40.004176401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zw72r,Uid:cc7db979-e513-4afe-b8d8-a73ff83037b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f51e7b2b128ad3fb8688b9905d71f3d8a71874a7e8408fdd798b0207416aac5\""
Apr 12 18:23:40.005228 kubelet[1932]: E0412 18:23:40.004906 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:40.005228 kubelet[1932]: E0412 18:23:40.005052 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:40.008097 env[1106]: time="2024-04-12T18:23:40.008040304Z" level=info msg="CreateContainer within sandbox \"8f51e7b2b128ad3fb8688b9905d71f3d8a71874a7e8408fdd798b0207416aac5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:23:40.008426 env[1106]: time="2024-04-12T18:23:40.008301106Z" level=info msg="CreateContainer within sandbox \"fd96be1111f35f24cd48fe1798007766cae40ebe97aa7d5deba0c21dee393fb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:23:40.032844 env[1106]: time="2024-04-12T18:23:40.032795853Z" level=info msg="CreateContainer within sandbox \"fd96be1111f35f24cd48fe1798007766cae40ebe97aa7d5deba0c21dee393fb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92a2c6fa56e975dd67d199e840548f8a34155b81b3566c0ebdf5141529e2e686\""
Apr 12 18:23:40.033680 env[1106]: time="2024-04-12T18:23:40.033469337Z" level=info msg="StartContainer for \"92a2c6fa56e975dd67d199e840548f8a34155b81b3566c0ebdf5141529e2e686\""
Apr 12 18:23:40.035229 env[1106]: time="2024-04-12T18:23:40.035181867Z" level=info msg="CreateContainer within sandbox \"8f51e7b2b128ad3fb8688b9905d71f3d8a71874a7e8408fdd798b0207416aac5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adc7823d70b446bfe194b0f38239131b476991c1d4b8035101059de793d75d46\""
Apr 12 18:23:40.037250 env[1106]: time="2024-04-12T18:23:40.036203553Z" level=info msg="StartContainer for \"adc7823d70b446bfe194b0f38239131b476991c1d4b8035101059de793d75d46\""
Apr 12 18:23:40.048260 systemd[1]: Started cri-containerd-92a2c6fa56e975dd67d199e840548f8a34155b81b3566c0ebdf5141529e2e686.scope.
Apr 12 18:23:40.057620 systemd[1]: Started cri-containerd-adc7823d70b446bfe194b0f38239131b476991c1d4b8035101059de793d75d46.scope.
Apr 12 18:23:40.085183 env[1106]: time="2024-04-12T18:23:40.084787725Z" level=info msg="StartContainer for \"92a2c6fa56e975dd67d199e840548f8a34155b81b3566c0ebdf5141529e2e686\" returns successfully"
Apr 12 18:23:40.100301 env[1106]: time="2024-04-12T18:23:40.100242817Z" level=info msg="StartContainer for \"adc7823d70b446bfe194b0f38239131b476991c1d4b8035101059de793d75d46\" returns successfully"
Apr 12 18:23:40.728683 kubelet[1932]: E0412 18:23:40.728622 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:40.731367 kubelet[1932]: E0412 18:23:40.731276 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:40.750656 kubelet[1932]: I0412 18:23:40.748267 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cwlcx" podStartSLOduration=18.748219144 podStartE2EDuration="18.748219144s" podCreationTimestamp="2024-04-12 18:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:40.73918129 +0000 UTC m=+32.199351465" watchObservedRunningTime="2024-04-12 18:23:40.748219144 +0000 UTC m=+32.208389359"
Apr 12 18:23:40.764110 kubelet[1932]: I0412 18:23:40.764055 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zw72r" podStartSLOduration=18.764015279 podStartE2EDuration="18.764015279s" podCreationTimestamp="2024-04-12 18:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:40.763223994 +0000 UTC m=+32.223394209" watchObservedRunningTime="2024-04-12 18:23:40.764015279 +0000 UTC m=+32.224185494"
Apr 12 18:23:40.926569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887712648.mount: Deactivated successfully.
Apr 12 18:23:41.732963 kubelet[1932]: E0412 18:23:41.732934 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:41.733380 kubelet[1932]: E0412 18:23:41.733001 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:42.734222 kubelet[1932]: E0412 18:23:42.734183 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:42.734911 kubelet[1932]: E0412 18:23:42.734883 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:23:44.529862 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:53228.service.
Apr 12 18:23:44.568635 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 53228 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:44.569908 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:44.573224 systemd-logind[1091]: New session 8 of user core.
Apr 12 18:23:44.574066 systemd[1]: Started session-8.scope.
Apr 12 18:23:44.694515 sshd[3332]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:44.696806 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:53228.service: Deactivated successfully.
Apr 12 18:23:44.697613 systemd[1]: session-8.scope: Deactivated successfully.
Apr 12 18:23:44.698094 systemd-logind[1091]: Session 8 logged out. Waiting for processes to exit.
Apr 12 18:23:44.698791 systemd-logind[1091]: Removed session 8.
Apr 12 18:23:49.699756 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:57762.service.
Apr 12 18:23:49.739682 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 57762 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:49.742295 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:49.746668 systemd[1]: Started session-9.scope.
Apr 12 18:23:49.746982 systemd-logind[1091]: New session 9 of user core.
Apr 12 18:23:49.857559 sshd[3348]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:49.860953 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:57776.service.
Apr 12 18:23:49.863921 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:57762.service: Deactivated successfully.
Apr 12 18:23:49.864555 systemd[1]: session-9.scope: Deactivated successfully.
Apr 12 18:23:49.864685 systemd-logind[1091]: Session 9 logged out. Waiting for processes to exit.
Apr 12 18:23:49.865678 systemd-logind[1091]: Removed session 9.
Apr 12 18:23:49.899844 sshd[3361]: Accepted publickey for core from 10.0.0.1 port 57776 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:49.900944 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:49.904263 systemd-logind[1091]: New session 10 of user core.
Apr 12 18:23:49.904599 systemd[1]: Started session-10.scope.
Apr 12 18:23:50.050146 sshd[3361]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:50.055114 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:57780.service.
Apr 12 18:23:50.055658 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:57776.service: Deactivated successfully.
Apr 12 18:23:50.056262 systemd[1]: session-10.scope: Deactivated successfully.
Apr 12 18:23:50.056995 systemd-logind[1091]: Session 10 logged out. Waiting for processes to exit.
Apr 12 18:23:50.058788 systemd-logind[1091]: Removed session 10.
Apr 12 18:23:50.100621 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:50.102296 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:50.105506 systemd-logind[1091]: New session 11 of user core.
Apr 12 18:23:50.106317 systemd[1]: Started session-11.scope.
Apr 12 18:23:50.220373 sshd[3373]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:50.222594 systemd[1]: session-11.scope: Deactivated successfully.
Apr 12 18:23:50.223129 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:57780.service: Deactivated successfully.
Apr 12 18:23:50.223951 systemd-logind[1091]: Session 11 logged out. Waiting for processes to exit.
Apr 12 18:23:50.224508 systemd-logind[1091]: Removed session 11.
Apr 12 18:23:55.224838 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:57792.service.
Apr 12 18:23:55.261515 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 57792 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:23:55.263080 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:55.266574 systemd-logind[1091]: New session 12 of user core.
Apr 12 18:23:55.266835 systemd[1]: Started session-12.scope.
Apr 12 18:23:55.371157 sshd[3392]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:55.373499 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:57792.service: Deactivated successfully.
Apr 12 18:23:55.374213 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 18:23:55.374692 systemd-logind[1091]: Session 12 logged out. Waiting for processes to exit.
Apr 12 18:23:55.375292 systemd-logind[1091]: Removed session 12.
Apr 12 18:24:00.375348 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:54354.service.
Apr 12 18:24:00.412127 sshd[3407]: Accepted publickey for core from 10.0.0.1 port 54354 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:00.413517 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:00.416890 systemd-logind[1091]: New session 13 of user core.
Apr 12 18:24:00.417228 systemd[1]: Started session-13.scope.
Apr 12 18:24:00.519529 sshd[3407]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:00.523265 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:54360.service.
Apr 12 18:24:00.523797 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:54354.service: Deactivated successfully.
Apr 12 18:24:00.524440 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 18:24:00.525169 systemd-logind[1091]: Session 13 logged out. Waiting for processes to exit.
Apr 12 18:24:00.525883 systemd-logind[1091]: Removed session 13.
Apr 12 18:24:00.561077 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 54360 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:00.562089 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:00.565609 systemd-logind[1091]: New session 14 of user core.
Apr 12 18:24:00.565805 systemd[1]: Started session-14.scope.
Apr 12 18:24:00.780706 sshd[3419]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:00.784367 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:54370.service.
Apr 12 18:24:00.784848 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:54360.service: Deactivated successfully.
Apr 12 18:24:00.786272 systemd[1]: session-14.scope: Deactivated successfully.
Apr 12 18:24:00.786962 systemd-logind[1091]: Session 14 logged out. Waiting for processes to exit.
Apr 12 18:24:00.787780 systemd-logind[1091]: Removed session 14.
Apr 12 18:24:00.824764 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 54370 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:00.825846 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:00.828808 systemd-logind[1091]: New session 15 of user core.
Apr 12 18:24:00.829580 systemd[1]: Started session-15.scope.
Apr 12 18:24:01.977403 sshd[3430]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:01.979871 systemd[1]: session-15.scope: Deactivated successfully.
Apr 12 18:24:01.980340 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:54370.service: Deactivated successfully.
Apr 12 18:24:01.981200 systemd-logind[1091]: Session 15 logged out. Waiting for processes to exit.
Apr 12 18:24:01.982171 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:54386.service.
Apr 12 18:24:01.982935 systemd-logind[1091]: Removed session 15.
Apr 12 18:24:02.019407 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 54386 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:02.020610 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:02.023758 systemd-logind[1091]: New session 16 of user core.
Apr 12 18:24:02.024550 systemd[1]: Started session-16.scope.
Apr 12 18:24:02.257991 sshd[3450]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:02.261241 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:54386.service: Deactivated successfully.
Apr 12 18:24:02.261913 systemd[1]: session-16.scope: Deactivated successfully.
Apr 12 18:24:02.262526 systemd-logind[1091]: Session 16 logged out. Waiting for processes to exit.
Apr 12 18:24:02.263591 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:54398.service.
Apr 12 18:24:02.264545 systemd-logind[1091]: Removed session 16.
Apr 12 18:24:02.301172 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 54398 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:02.302284 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:02.305722 systemd-logind[1091]: New session 17 of user core.
Apr 12 18:24:02.306117 systemd[1]: Started session-17.scope.
Apr 12 18:24:02.415213 sshd[3461]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:02.417576 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:54398.service: Deactivated successfully.
Apr 12 18:24:02.418373 systemd[1]: session-17.scope: Deactivated successfully.
Apr 12 18:24:02.418903 systemd-logind[1091]: Session 17 logged out. Waiting for processes to exit.
Apr 12 18:24:02.419723 systemd-logind[1091]: Removed session 17.
Apr 12 18:24:07.420019 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:54400.service.
Apr 12 18:24:07.456779 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 54400 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:07.458052 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:07.461661 systemd-logind[1091]: New session 18 of user core.
Apr 12 18:24:07.462592 systemd[1]: Started session-18.scope.
Apr 12 18:24:07.565787 sshd[3475]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:07.568112 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:54400.service: Deactivated successfully.
Apr 12 18:24:07.568940 systemd[1]: session-18.scope: Deactivated successfully.
Apr 12 18:24:07.569419 systemd-logind[1091]: Session 18 logged out. Waiting for processes to exit.
Apr 12 18:24:07.570068 systemd-logind[1091]: Removed session 18.
Apr 12 18:24:12.570545 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:53046.service.
Apr 12 18:24:12.608075 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:12.609188 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:12.612314 systemd-logind[1091]: New session 19 of user core.
Apr 12 18:24:12.613212 systemd[1]: Started session-19.scope.
Apr 12 18:24:12.722010 sshd[3494]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:12.724275 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:53046.service: Deactivated successfully.
Apr 12 18:24:12.725146 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:24:12.725650 systemd-logind[1091]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:24:12.726535 systemd-logind[1091]: Removed session 19.
Apr 12 18:24:17.727270 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:53050.service.
Apr 12 18:24:17.764552 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 53050 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:17.766079 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:17.769549 systemd-logind[1091]: New session 20 of user core.
Apr 12 18:24:17.770464 systemd[1]: Started session-20.scope.
Apr 12 18:24:17.881560 sshd[3507]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:17.886382 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:53050.service: Deactivated successfully.
Apr 12 18:24:17.887216 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:24:17.891615 systemd-logind[1091]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:24:17.892356 systemd-logind[1091]: Removed session 20.
Apr 12 18:24:22.884845 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:46746.service.
Apr 12 18:24:22.921776 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:22.922920 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:22.925818 systemd-logind[1091]: New session 21 of user core.
Apr 12 18:24:22.926645 systemd[1]: Started session-21.scope.
Apr 12 18:24:23.032422 sshd[3520]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:23.035422 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:46758.service.
Apr 12 18:24:23.035940 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:46746.service: Deactivated successfully.
Apr 12 18:24:23.036701 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 18:24:23.037387 systemd-logind[1091]: Session 21 logged out. Waiting for processes to exit.
Apr 12 18:24:23.038415 systemd-logind[1091]: Removed session 21.
Apr 12 18:24:23.072275 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 46758 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:23.073368 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:23.076592 systemd-logind[1091]: New session 22 of user core.
Apr 12 18:24:23.077339 systemd[1]: Started session-22.scope.
Apr 12 18:24:25.654251 env[1106]: time="2024-04-12T18:24:25.654209233Z" level=info msg="StopContainer for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" with timeout 30 (s)"
Apr 12 18:24:25.654970 env[1106]: time="2024-04-12T18:24:25.654943077Z" level=info msg="Stop container \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" with signal terminated"
Apr 12 18:24:25.669228 systemd[1]: cri-containerd-87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866.scope: Deactivated successfully.
Apr 12 18:24:25.689823 env[1106]: time="2024-04-12T18:24:25.689757923Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:24:25.690940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866-rootfs.mount: Deactivated successfully.
Apr 12 18:24:25.695939 env[1106]: time="2024-04-12T18:24:25.695896919Z" level=info msg="StopContainer for \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\" with timeout 2 (s)"
Apr 12 18:24:25.696237 env[1106]: time="2024-04-12T18:24:25.696140840Z" level=info msg="Stop container \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\" with signal terminated"
Apr 12 18:24:25.704423 env[1106]: time="2024-04-12T18:24:25.704381329Z" level=info msg="shim disconnected" id=87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866
Apr 12 18:24:25.704423 env[1106]: time="2024-04-12T18:24:25.704424489Z" level=warning msg="cleaning up after shim disconnected" id=87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866 namespace=k8s.io
Apr 12 18:24:25.704607 env[1106]: time="2024-04-12T18:24:25.704434049Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:25.705406 systemd-networkd[1001]: lxc_health: Link DOWN
Apr 12 18:24:25.705412 systemd-networkd[1001]: lxc_health: Lost carrier
Apr 12 18:24:25.712412 env[1106]: time="2024-04-12T18:24:25.712365456Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3591 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:25.714592 env[1106]: time="2024-04-12T18:24:25.714554469Z" level=info msg="StopContainer for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" returns successfully"
Apr 12 18:24:25.715202 env[1106]: time="2024-04-12T18:24:25.715131513Z" level=info msg="StopPodSandbox for \"6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46\""
Apr 12 18:24:25.715258 env[1106]: time="2024-04-12T18:24:25.715203313Z" level=info msg="Container to stop \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:25.716589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46-shm.mount: Deactivated successfully.
Apr 12 18:24:25.724559 systemd[1]: cri-containerd-6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46.scope: Deactivated successfully.
Apr 12 18:24:25.741975 systemd[1]: cri-containerd-49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba.scope: Deactivated successfully.
Apr 12 18:24:25.742274 systemd[1]: cri-containerd-49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba.scope: Consumed 6.474s CPU time.
Apr 12 18:24:25.749316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46-rootfs.mount: Deactivated successfully.
Apr 12 18:24:25.759970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba-rootfs.mount: Deactivated successfully.
Apr 12 18:24:25.761265 env[1106]: time="2024-04-12T18:24:25.761208025Z" level=info msg="shim disconnected" id=6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46
Apr 12 18:24:25.762274 env[1106]: time="2024-04-12T18:24:25.761836749Z" level=warning msg="cleaning up after shim disconnected" id=6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46 namespace=k8s.io
Apr 12 18:24:25.762274 env[1106]: time="2024-04-12T18:24:25.761856909Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:25.770540 env[1106]: time="2024-04-12T18:24:25.770405279Z" level=info msg="shim disconnected" id=49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba
Apr 12 18:24:25.770720 env[1106]: time="2024-04-12T18:24:25.770616201Z" level=warning msg="cleaning up after shim disconnected" id=49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba namespace=k8s.io
Apr 12 18:24:25.770720 env[1106]: time="2024-04-12T18:24:25.770641881Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:25.770892 env[1106]: time="2024-04-12T18:24:25.770861882Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:25.771168 env[1106]: time="2024-04-12T18:24:25.771145164Z" level=info msg="TearDown network for sandbox \"6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46\" successfully"
Apr 12 18:24:25.771214 env[1106]: time="2024-04-12T18:24:25.771170084Z" level=info msg="StopPodSandbox for \"6a8dcd859d6c73c98fc3fffaca9095770e2421c7282507c9a5643eebd8e91d46\" returns successfully"
Apr 12 18:24:25.784412 env[1106]: time="2024-04-12T18:24:25.784370082Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3651 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:25.786164 env[1106]: time="2024-04-12T18:24:25.786132732Z" level=info msg="StopContainer for \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\" returns successfully"
Apr 12 18:24:25.786562 env[1106]: time="2024-04-12T18:24:25.786532095Z" level=info msg="StopPodSandbox for \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\""
Apr 12 18:24:25.786765 env[1106]: time="2024-04-12T18:24:25.786744576Z" level=info msg="Container to stop \"560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:25.786868 env[1106]: time="2024-04-12T18:24:25.786850737Z" level=info msg="Container to stop \"a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:25.786933 env[1106]: time="2024-04-12T18:24:25.786915577Z" level=info msg="Container to stop \"1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:25.787009 env[1106]: time="2024-04-12T18:24:25.786993497Z" level=info msg="Container to stop \"a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:25.787076 env[1106]: time="2024-04-12T18:24:25.787056218Z" level=info msg="Container to stop \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:25.791986 systemd[1]: cri-containerd-d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d.scope: Deactivated successfully.
Apr 12 18:24:25.813916 kubelet[1932]: I0412 18:24:25.813885 1932 scope.go:117] "RemoveContainer" containerID="87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866"
Apr 12 18:24:25.815709 env[1106]: time="2024-04-12T18:24:25.815670867Z" level=info msg="RemoveContainer for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\""
Apr 12 18:24:25.820977 env[1106]: time="2024-04-12T18:24:25.820937218Z" level=info msg="shim disconnected" id=d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d
Apr 12 18:24:25.820977 env[1106]: time="2024-04-12T18:24:25.820975898Z" level=warning msg="cleaning up after shim disconnected" id=d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d namespace=k8s.io
Apr 12 18:24:25.821089 env[1106]: time="2024-04-12T18:24:25.820985458Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:25.823395 env[1106]: time="2024-04-12T18:24:25.823351752Z" level=info msg="RemoveContainer for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" returns successfully"
Apr 12 18:24:25.824405 kubelet[1932]: I0412 18:24:25.824329 1932 scope.go:117] "RemoveContainer" containerID="87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866"
Apr 12 18:24:25.824837 env[1106]: time="2024-04-12T18:24:25.824762361Z" level=error msg="ContainerStatus for \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\": not found"
Apr 12 18:24:25.826453 kubelet[1932]: E0412 18:24:25.826427 1932 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\": not found" containerID="87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866"
Apr 12 18:24:25.826691 kubelet[1932]: I0412 18:24:25.826663 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866"} err="failed to get container status \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\": rpc error: code = NotFound desc = an error occurred when try to find container \"87f047baeb7c818d97cfbd72deae783867535f3ae7a29a92db54f99e14be3866\": not found"
Apr 12 18:24:25.830429 env[1106]: time="2024-04-12T18:24:25.830390914Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3682 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:25.830733 env[1106]: time="2024-04-12T18:24:25.830708956Z" level=info msg="TearDown network for sandbox \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" successfully"
Apr 12 18:24:25.830783 env[1106]: time="2024-04-12T18:24:25.830733436Z" level=info msg="StopPodSandbox for \"d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d\" returns successfully"
Apr 12 18:24:25.969887 kubelet[1932]: I0412 18:24:25.969732 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-cgroup\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.969887 kubelet[1932]: I0412 18:24:25.969778 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2eaeea72-e073-4388-ad73-2cdbe45859b6-clustermesh-secrets\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.969887 kubelet[1932]: I0412 18:24:25.969831 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-bpf-maps\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.969887 kubelet[1932]: I0412 18:24:25.969855 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldt7c\" (UniqueName: \"kubernetes.io/projected/b8102758-b16d-42c9-afd8-1750f32c78fa-kube-api-access-ldt7c\") pod \"b8102758-b16d-42c9-afd8-1750f32c78fa\" (UID: \"b8102758-b16d-42c9-afd8-1750f32c78fa\") "
Apr 12 18:24:25.969887 kubelet[1932]: I0412 18:24:25.969885 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-lib-modules\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970104 kubelet[1932]: I0412 18:24:25.969907 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-config-path\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970104 kubelet[1932]: I0412 18:24:25.969927 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8102758-b16d-42c9-afd8-1750f32c78fa-cilium-config-path\") pod \"b8102758-b16d-42c9-afd8-1750f32c78fa\" (UID: \"b8102758-b16d-42c9-afd8-1750f32c78fa\") "
Apr 12 18:24:25.970104 kubelet[1932]: I0412 18:24:25.969956 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-hostproc\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970104 kubelet[1932]: I0412 18:24:25.969979 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-run\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970104 kubelet[1932]: I0412 18:24:25.970009 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-net\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970104 kubelet[1932]: I0412 18:24:25.970039 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-hubble-tls\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970244 kubelet[1932]: I0412 18:24:25.970059 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-xtables-lock\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970244 kubelet[1932]: I0412 18:24:25.970078 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-kernel\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970244 kubelet[1932]: I0412 18:24:25.970096 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cni-path\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970244 kubelet[1932]: I0412 18:24:25.970121 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-etc-cni-netd\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970244 kubelet[1932]: I0412 18:24:25.970142 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8ppt\" (UniqueName: \"kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-kube-api-access-k8ppt\") pod \"2eaeea72-e073-4388-ad73-2cdbe45859b6\" (UID: \"2eaeea72-e073-4388-ad73-2cdbe45859b6\") "
Apr 12 18:24:25.970244 kubelet[1932]: I0412 18:24:25.970227 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970423 kubelet[1932]: I0412 18:24:25.970284 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970423 kubelet[1932]: I0412 18:24:25.970397 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970423 kubelet[1932]: I0412 18:24:25.970404 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970423 kubelet[1932]: I0412 18:24:25.970421 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970541 kubelet[1932]: I0412 18:24:25.970439 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970541 kubelet[1932]: I0412 18:24:25.970438 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970541 kubelet[1932]: I0412 18:24:25.970471 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970693 kubelet[1932]: I0412 18:24:25.970659 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.970693 kubelet[1932]: I0412 18:24:25.970690 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:25.974529 kubelet[1932]: I0412 18:24:25.974461 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8102758-b16d-42c9-afd8-1750f32c78fa-kube-api-access-ldt7c" (OuterVolumeSpecName: "kube-api-access-ldt7c") pod "b8102758-b16d-42c9-afd8-1750f32c78fa" (UID: "b8102758-b16d-42c9-afd8-1750f32c78fa"). InnerVolumeSpecName "kube-api-access-ldt7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:24:25.976476 kubelet[1932]: I0412 18:24:25.976436 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:24:25.976620 kubelet[1932]: I0412 18:24:25.976593 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eaeea72-e073-4388-ad73-2cdbe45859b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:24:25.976987 kubelet[1932]: I0412 18:24:25.976953 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-kube-api-access-k8ppt" (OuterVolumeSpecName: "kube-api-access-k8ppt") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "kube-api-access-k8ppt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:24:25.978360 kubelet[1932]: I0412 18:24:25.978335 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8102758-b16d-42c9-afd8-1750f32c78fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8102758-b16d-42c9-afd8-1750f32c78fa" (UID: "b8102758-b16d-42c9-afd8-1750f32c78fa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:24:25.978858 kubelet[1932]: I0412 18:24:25.978835 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2eaeea72-e073-4388-ad73-2cdbe45859b6" (UID: "2eaeea72-e073-4388-ad73-2cdbe45859b6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:24:26.071226 kubelet[1932]: I0412 18:24:26.071166 1932 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071226 kubelet[1932]: I0412 18:24:26.071210 1932 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ldt7c\" (UniqueName: \"kubernetes.io/projected/b8102758-b16d-42c9-afd8-1750f32c78fa-kube-api-access-ldt7c\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071226 kubelet[1932]: I0412 18:24:26.071221 1932 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071226 kubelet[1932]: I0412 18:24:26.071232 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071226 kubelet[1932]: I0412 18:24:26.071244 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8102758-b16d-42c9-afd8-1750f32c78fa-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071253 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071261 1932 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071272 1932 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071281 1932 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071292 1932 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071301 1932 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071310 1932 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071553 kubelet[1932]: I0412 18:24:26.071320 1932 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k8ppt\" (UniqueName: \"kubernetes.io/projected/2eaeea72-e073-4388-ad73-2cdbe45859b6-kube-api-access-k8ppt\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071749 kubelet[1932]: I0412 18:24:26.071329 1932 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071749 kubelet[1932]: I0412 18:24:26.071338 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2eaeea72-e073-4388-ad73-2cdbe45859b6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.071749 kubelet[1932]: I0412 18:24:26.071348 1932 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2eaeea72-e073-4388-ad73-2cdbe45859b6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 12 18:24:26.117201 systemd[1]: Removed slice kubepods-besteffort-podb8102758_b16d_42c9_afd8_1750f32c78fa.slice.
Apr 12 18:24:26.643148 kubelet[1932]: I0412 18:24:26.643116 1932 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b8102758-b16d-42c9-afd8-1750f32c78fa" path="/var/lib/kubelet/pods/b8102758-b16d-42c9-afd8-1750f32c78fa/volumes"
Apr 12 18:24:26.647125 systemd[1]: Removed slice kubepods-burstable-pod2eaeea72_e073_4388_ad73_2cdbe45859b6.slice.
Apr 12 18:24:26.647203 systemd[1]: kubepods-burstable-pod2eaeea72_e073_4388_ad73_2cdbe45859b6.slice: Consumed 6.700s CPU time.
Apr 12 18:24:26.665521 systemd[1]: var-lib-kubelet-pods-b8102758\x2db16d\x2d42c9\x2dafd8\x2d1750f32c78fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldt7c.mount: Deactivated successfully.
Apr 12 18:24:26.665617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d-rootfs.mount: Deactivated successfully.
Apr 12 18:24:26.665689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8367d4d08d7daeeaff41d49047821a5b52a97a948381f11e7ee914d8107ca5d-shm.mount: Deactivated successfully.
Apr 12 18:24:26.665752 systemd[1]: var-lib-kubelet-pods-2eaeea72\x2de073\x2d4388\x2dad73\x2d2cdbe45859b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8ppt.mount: Deactivated successfully.
Apr 12 18:24:26.665803 systemd[1]: var-lib-kubelet-pods-2eaeea72\x2de073\x2d4388\x2dad73\x2d2cdbe45859b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 12 18:24:26.665856 systemd[1]: var-lib-kubelet-pods-2eaeea72\x2de073\x2d4388\x2dad73\x2d2cdbe45859b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 12 18:24:26.821213 kubelet[1932]: I0412 18:24:26.821184 1932 scope.go:117] "RemoveContainer" containerID="49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba"
Apr 12 18:24:26.822944 env[1106]: time="2024-04-12T18:24:26.822910713Z" level=info msg="RemoveContainer for \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\""
Apr 12 18:24:26.825919 env[1106]: time="2024-04-12T18:24:26.825883050Z" level=info msg="RemoveContainer for \"49231ab6314d53d82f34ef54bd2d0e22307e48f081d09d5c9e6c80895038daba\" returns successfully"
Apr 12 18:24:26.826680 kubelet[1932]: I0412 18:24:26.826653 1932 scope.go:117] "RemoveContainer" containerID="560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f"
Apr 12 18:24:26.827580 env[1106]: time="2024-04-12T18:24:26.827549780Z" level=info msg="RemoveContainer for \"560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f\""
Apr 12 18:24:26.829992 env[1106]: time="2024-04-12T18:24:26.829958754Z" level=info msg="RemoveContainer for \"560ccb4b507ae3a4a7a36bc7aa5a489e4ac2ddea9aa28a1476af3906acead77f\" returns successfully"
Apr 12 18:24:26.830212 kubelet[1932]: I0412 18:24:26.830196 1932 scope.go:117] "RemoveContainer" containerID="1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01"
Apr 12 18:24:26.831372 env[1106]: time="2024-04-12T18:24:26.831344242Z" level=info msg="RemoveContainer for \"1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01\""
Apr 12 18:24:26.833929 env[1106]: time="2024-04-12T18:24:26.833871617Z" level=info msg="RemoveContainer for \"1530c6dc562c61bcc22ee590c257c52019c1dc5fead937c7888aa300b5862f01\" returns successfully"
Apr 12 18:24:26.834136 kubelet[1932]: I0412 18:24:26.834089 1932 scope.go:117] "RemoveContainer" containerID="a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b"
Apr 12 18:24:26.835321 env[1106]: time="2024-04-12T18:24:26.835289385Z" level=info msg="RemoveContainer for \"a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b\""
Apr 12 18:24:26.837889 env[1106]: time="2024-04-12T18:24:26.837854600Z" level=info msg="RemoveContainer for \"a6c92228acc542214a3d1b7aa70769dcc174a83ce891705cf13bee32482e0b0b\" returns successfully"
Apr 12 18:24:26.838192 kubelet[1932]: I0412 18:24:26.838164 1932 scope.go:117] "RemoveContainer" containerID="a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7"
Apr 12 18:24:26.839442 env[1106]: time="2024-04-12T18:24:26.839405409Z" level=info msg="RemoveContainer for \"a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7\""
Apr 12 18:24:26.841851 env[1106]: time="2024-04-12T18:24:26.841820663Z" level=info msg="RemoveContainer for \"a047b98b6e32f499cbea576614f5bd06b464718672b0e048b221064a5f7451f7\" returns successfully"
Apr 12 18:24:27.625063 sshd[3532]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:27.628587 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:46758.service: Deactivated successfully.
Apr 12 18:24:27.629191 systemd[1]: session-22.scope: Deactivated successfully.
Apr 12 18:24:27.629341 systemd[1]: session-22.scope: Consumed 1.902s CPU time.
Apr 12 18:24:27.629820 systemd-logind[1091]: Session 22 logged out. Waiting for processes to exit.
Apr 12 18:24:27.630872 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:46774.service.
Apr 12 18:24:27.631669 systemd-logind[1091]: Removed session 22.
Apr 12 18:24:27.670599 sshd[3701]: Accepted publickey for core from 10.0.0.1 port 46774 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c
Apr 12 18:24:27.671939 sshd[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:27.676213 systemd-logind[1091]: New session 23 of user core.
Apr 12 18:24:27.676890 systemd[1]: Started session-23.scope.
Apr 12 18:24:28.642517 kubelet[1932]: I0412 18:24:28.642485 1932 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" path="/var/lib/kubelet/pods/2eaeea72-e073-4388-ad73-2cdbe45859b6/volumes"
Apr 12 18:24:28.716348 kubelet[1932]: E0412 18:24:28.716326 1932 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:24:29.077823 sshd[3701]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:29.081398 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:46786.service.
Apr 12 18:24:29.083692 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:46774.service: Deactivated successfully.
Apr 12 18:24:29.084308 systemd[1]: session-23.scope: Deactivated successfully.
Apr 12 18:24:29.084447 systemd[1]: session-23.scope: Consumed 1.302s CPU time.
Apr 12 18:24:29.085798 systemd-logind[1091]: Session 23 logged out. Waiting for processes to exit.
Apr 12 18:24:29.090178 systemd-logind[1091]: Removed session 23.
Apr 12 18:24:29.096935 kubelet[1932]: I0412 18:24:29.096901 1932 topology_manager.go:215] "Topology Admit Handler" podUID="e59d5440-4b45-4d47-a02b-a63b43e4f229" podNamespace="kube-system" podName="cilium-bgkdh" Apr 12 18:24:29.097037 kubelet[1932]: E0412 18:24:29.096956 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" containerName="apply-sysctl-overwrites" Apr 12 18:24:29.097037 kubelet[1932]: E0412 18:24:29.096967 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" containerName="mount-bpf-fs" Apr 12 18:24:29.097037 kubelet[1932]: E0412 18:24:29.096975 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" containerName="clean-cilium-state" Apr 12 18:24:29.097037 kubelet[1932]: E0412 18:24:29.096982 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" containerName="cilium-agent" Apr 12 18:24:29.097037 kubelet[1932]: E0412 18:24:29.096989 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" containerName="mount-cgroup" Apr 12 18:24:29.097037 kubelet[1932]: E0412 18:24:29.096995 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8102758-b16d-42c9-afd8-1750f32c78fa" containerName="cilium-operator" Apr 12 18:24:29.097037 kubelet[1932]: I0412 18:24:29.097022 1932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8102758-b16d-42c9-afd8-1750f32c78fa" containerName="cilium-operator" Apr 12 18:24:29.097037 kubelet[1932]: I0412 18:24:29.097028 1932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eaeea72-e073-4388-ad73-2cdbe45859b6" containerName="cilium-agent" Apr 12 18:24:29.102602 systemd[1]: Created slice kubepods-burstable-pode59d5440_4b45_4d47_a02b_a63b43e4f229.slice. 
Apr 12 18:24:29.130364 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 46786 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:24:29.131586 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:24:29.135005 systemd-logind[1091]: New session 24 of user core. Apr 12 18:24:29.135846 systemd[1]: Started session-24.scope. Apr 12 18:24:29.190771 kubelet[1932]: I0412 18:24:29.190730 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-cgroup\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.190972 kubelet[1932]: I0412 18:24:29.190959 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-lib-modules\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191103 kubelet[1932]: I0412 18:24:29.191090 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-etc-cni-netd\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191260 kubelet[1932]: I0412 18:24:29.191216 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-config-path\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191311 kubelet[1932]: I0412 18:24:29.191275 1932 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-bpf-maps\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191311 kubelet[1932]: I0412 18:24:29.191307 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-clustermesh-secrets\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191364 kubelet[1932]: I0412 18:24:29.191330 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-run\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191364 kubelet[1932]: I0412 18:24:29.191351 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-net\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191413 kubelet[1932]: I0412 18:24:29.191370 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkgbn\" (UniqueName: \"kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-kube-api-access-nkgbn\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191413 kubelet[1932]: I0412 18:24:29.191401 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-xtables-lock\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191457 kubelet[1932]: I0412 18:24:29.191420 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-hubble-tls\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191457 kubelet[1932]: I0412 18:24:29.191440 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-hostproc\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191517 kubelet[1932]: I0412 18:24:29.191465 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-ipsec-secrets\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191517 kubelet[1932]: I0412 18:24:29.191492 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cni-path\") pod \"cilium-bgkdh\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.191517 kubelet[1932]: I0412 18:24:29.191512 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-kernel\") pod \"cilium-bgkdh\" (UID: 
\"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " pod="kube-system/cilium-bgkdh" Apr 12 18:24:29.257905 sshd[3712]: pam_unix(sshd:session): session closed for user core Apr 12 18:24:29.261187 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:46786.service: Deactivated successfully. Apr 12 18:24:29.261898 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:24:29.262574 systemd-logind[1091]: Session 24 logged out. Waiting for processes to exit. Apr 12 18:24:29.264017 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:34154.service. Apr 12 18:24:29.265245 systemd-logind[1091]: Removed session 24. Apr 12 18:24:29.268995 kubelet[1932]: E0412 18:24:29.268967 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-nkgbn lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-bgkdh" podUID="e59d5440-4b45-4d47-a02b-a63b43e4f229" Apr 12 18:24:29.310802 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 34154 ssh2: RSA SHA256:QUhY8l8fo09wOQgBdU1SXiqM8N1XKRTa5W0hOYR625c Apr 12 18:24:29.312075 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:24:29.315335 systemd-logind[1091]: New session 25 of user core. Apr 12 18:24:29.316282 systemd[1]: Started session-25.scope. 
Apr 12 18:24:29.895068 kubelet[1932]: I0412 18:24:29.895023 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-lib-modules\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895068 kubelet[1932]: I0412 18:24:29.895067 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-bpf-maps\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895426 kubelet[1932]: I0412 18:24:29.895087 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-run\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895426 kubelet[1932]: I0412 18:24:29.895107 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-net\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895426 kubelet[1932]: I0412 18:24:29.895127 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-hostproc\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895426 kubelet[1932]: I0412 18:24:29.895146 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-cgroup\") pod 
\"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895426 kubelet[1932]: I0412 18:24:29.895163 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-xtables-lock\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895426 kubelet[1932]: I0412 18:24:29.895189 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-ipsec-secrets\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895595 kubelet[1932]: I0412 18:24:29.895207 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cni-path\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895595 kubelet[1932]: I0412 18:24:29.895228 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-hubble-tls\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895595 kubelet[1932]: I0412 18:24:29.895246 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-etc-cni-netd\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895595 kubelet[1932]: I0412 18:24:29.895264 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-clustermesh-secrets\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895595 kubelet[1932]: I0412 18:24:29.895284 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkgbn\" (UniqueName: \"kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-kube-api-access-nkgbn\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895595 kubelet[1932]: I0412 18:24:29.895303 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-kernel\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895758 kubelet[1932]: I0412 18:24:29.895323 1932 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-config-path\") pod \"e59d5440-4b45-4d47-a02b-a63b43e4f229\" (UID: \"e59d5440-4b45-4d47-a02b-a63b43e4f229\") " Apr 12 18:24:29.895758 kubelet[1932]: I0412 18:24:29.895507 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895758 kubelet[1932]: I0412 18:24:29.895552 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895758 kubelet[1932]: I0412 18:24:29.895569 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895758 kubelet[1932]: I0412 18:24:29.895587 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895874 kubelet[1932]: I0412 18:24:29.895603 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895874 kubelet[1932]: I0412 18:24:29.895618 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-hostproc" (OuterVolumeSpecName: "hostproc") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895874 kubelet[1932]: I0412 18:24:29.895654 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.895874 kubelet[1932]: I0412 18:24:29.895674 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.896552 kubelet[1932]: I0412 18:24:29.896055 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cni-path" (OuterVolumeSpecName: "cni-path") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.897029 kubelet[1932]: I0412 18:24:29.896983 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:24:29.897073 kubelet[1932]: I0412 18:24:29.897033 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:29.898816 kubelet[1932]: I0412 18:24:29.898772 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-kube-api-access-nkgbn" (OuterVolumeSpecName: "kube-api-access-nkgbn") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "kube-api-access-nkgbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:29.900415 kubelet[1932]: I0412 18:24:29.900109 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:24:29.900415 kubelet[1932]: I0412 18:24:29.900283 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:24:29.900254 systemd[1]: var-lib-kubelet-pods-e59d5440\x2d4b45\x2d4d47\x2da02b\x2da63b43e4f229-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnkgbn.mount: Deactivated successfully. Apr 12 18:24:29.900355 systemd[1]: var-lib-kubelet-pods-e59d5440\x2d4b45\x2d4d47\x2da02b\x2da63b43e4f229-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:24:29.901104 kubelet[1932]: I0412 18:24:29.901064 1932 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e59d5440-4b45-4d47-a02b-a63b43e4f229" (UID: "e59d5440-4b45-4d47-a02b-a63b43e4f229"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:29.995738 kubelet[1932]: I0412 18:24:29.995683 1932 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995738 kubelet[1932]: I0412 18:24:29.995733 1932 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995758 1932 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nkgbn\" (UniqueName: \"kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-kube-api-access-nkgbn\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995779 1932 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995796 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995807 1932 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995818 1932 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 
kubelet[1932]: I0412 18:24:29.995831 1932 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995840 1932 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.995875 kubelet[1932]: I0412 18:24:29.995849 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.996066 kubelet[1932]: I0412 18:24:29.995859 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.996066 kubelet[1932]: I0412 18:24:29.995867 1932 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.996066 kubelet[1932]: I0412 18:24:29.995877 1932 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e59d5440-4b45-4d47-a02b-a63b43e4f229-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.996066 kubelet[1932]: I0412 18:24:29.995887 1932 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e59d5440-4b45-4d47-a02b-a63b43e4f229-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:29.996066 kubelet[1932]: I0412 18:24:29.995899 1932 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e59d5440-4b45-4d47-a02b-a63b43e4f229-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:24:30.106960 kubelet[1932]: I0412 18:24:30.106926 1932 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T18:24:30Z","lastTransitionTime":"2024-04-12T18:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 12 18:24:30.296428 systemd[1]: var-lib-kubelet-pods-e59d5440\x2d4b45\x2d4d47\x2da02b\x2da63b43e4f229-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:24:30.296540 systemd[1]: var-lib-kubelet-pods-e59d5440\x2d4b45\x2d4d47\x2da02b\x2da63b43e4f229-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:24:30.641065 kubelet[1932]: E0412 18:24:30.641019 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:24:30.645897 systemd[1]: Removed slice kubepods-burstable-pode59d5440_4b45_4d47_a02b_a63b43e4f229.slice. Apr 12 18:24:30.858466 kubelet[1932]: I0412 18:24:30.858420 1932 topology_manager.go:215] "Topology Admit Handler" podUID="a57a1aa8-e20e-4010-ac23-ed3e83169597" podNamespace="kube-system" podName="cilium-vv8x2" Apr 12 18:24:30.865678 systemd[1]: Created slice kubepods-burstable-poda57a1aa8_e20e_4010_ac23_ed3e83169597.slice. 
Apr 12 18:24:30.902326 kubelet[1932]: I0412 18:24:30.902214 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a57a1aa8-e20e-4010-ac23-ed3e83169597-cilium-ipsec-secrets\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2" Apr 12 18:24:30.902778 kubelet[1932]: I0412 18:24:30.902759 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a57a1aa8-e20e-4010-ac23-ed3e83169597-hubble-tls\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2" Apr 12 18:24:30.902889 kubelet[1932]: I0412 18:24:30.902877 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-bpf-maps\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2" Apr 12 18:24:30.903019 kubelet[1932]: I0412 18:24:30.902994 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-lib-modules\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2" Apr 12 18:24:30.903056 kubelet[1932]: I0412 18:24:30.903047 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-xtables-lock\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2" Apr 12 18:24:30.903156 kubelet[1932]: I0412 18:24:30.903132 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-cilium-cgroup\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903198 kubelet[1932]: I0412 18:24:30.903173 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a57a1aa8-e20e-4010-ac23-ed3e83169597-cilium-config-path\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903198 kubelet[1932]: I0412 18:24:30.903195 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-etc-cni-netd\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903251 kubelet[1932]: I0412 18:24:30.903216 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-cilium-run\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903251 kubelet[1932]: I0412 18:24:30.903236 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-hostproc\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903300 kubelet[1932]: I0412 18:24:30.903256 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a57a1aa8-e20e-4010-ac23-ed3e83169597-clustermesh-secrets\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903300 kubelet[1932]: I0412 18:24:30.903275 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-host-proc-sys-net\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903344 kubelet[1932]: I0412 18:24:30.903294 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zkp\" (UniqueName: \"kubernetes.io/projected/a57a1aa8-e20e-4010-ac23-ed3e83169597-kube-api-access-v6zkp\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903344 kubelet[1932]: I0412 18:24:30.903322 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-cni-path\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:30.903344 kubelet[1932]: I0412 18:24:30.903343 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a57a1aa8-e20e-4010-ac23-ed3e83169597-host-proc-sys-kernel\") pod \"cilium-vv8x2\" (UID: \"a57a1aa8-e20e-4010-ac23-ed3e83169597\") " pod="kube-system/cilium-vv8x2"
Apr 12 18:24:31.167994 kubelet[1932]: E0412 18:24:31.167877 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:31.169269 env[1106]: time="2024-04-12T18:24:31.169169396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vv8x2,Uid:a57a1aa8-e20e-4010-ac23-ed3e83169597,Namespace:kube-system,Attempt:0,}"
Apr 12 18:24:31.185050 env[1106]: time="2024-04-12T18:24:31.184967562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:24:31.185050 env[1106]: time="2024-04-12T18:24:31.185009202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:24:31.185050 env[1106]: time="2024-04-12T18:24:31.185019762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:24:31.185344 env[1106]: time="2024-04-12T18:24:31.185312244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8 pid=3755 runtime=io.containerd.runc.v2
Apr 12 18:24:31.196411 systemd[1]: Started cri-containerd-6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8.scope.
Apr 12 18:24:31.228945 env[1106]: time="2024-04-12T18:24:31.228887961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vv8x2,Uid:a57a1aa8-e20e-4010-ac23-ed3e83169597,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\""
Apr 12 18:24:31.230959 kubelet[1932]: E0412 18:24:31.229559 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:31.231882 env[1106]: time="2024-04-12T18:24:31.231815937Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:24:31.241287 env[1106]: time="2024-04-12T18:24:31.241245508Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1\""
Apr 12 18:24:31.242532 env[1106]: time="2024-04-12T18:24:31.241744431Z" level=info msg="StartContainer for \"b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1\""
Apr 12 18:24:31.255568 systemd[1]: Started cri-containerd-b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1.scope.
Apr 12 18:24:31.289435 env[1106]: time="2024-04-12T18:24:31.289394771Z" level=info msg="StartContainer for \"b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1\" returns successfully"
Apr 12 18:24:31.300150 systemd[1]: cri-containerd-b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1.scope: Deactivated successfully.
Apr 12 18:24:31.317500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1-rootfs.mount: Deactivated successfully.
Apr 12 18:24:31.328842 env[1106]: time="2024-04-12T18:24:31.328797105Z" level=info msg="shim disconnected" id=b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1
Apr 12 18:24:31.329033 env[1106]: time="2024-04-12T18:24:31.329014587Z" level=warning msg="cleaning up after shim disconnected" id=b00b847480a117c52eff72b136ef379ff9fdc9c52d76bc7a5dd1af25596eabe1 namespace=k8s.io
Apr 12 18:24:31.329092 env[1106]: time="2024-04-12T18:24:31.329079667Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:31.336114 env[1106]: time="2024-04-12T18:24:31.336081025Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:31.829832 kubelet[1932]: E0412 18:24:31.829803 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:31.831818 env[1106]: time="2024-04-12T18:24:31.831780286Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:24:31.842650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649443287.mount: Deactivated successfully.
Apr 12 18:24:31.844469 env[1106]: time="2024-04-12T18:24:31.844431355Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4\""
Apr 12 18:24:31.845193 env[1106]: time="2024-04-12T18:24:31.845145799Z" level=info msg="StartContainer for \"ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4\""
Apr 12 18:24:31.859687 systemd[1]: Started cri-containerd-ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4.scope.
Apr 12 18:24:31.887680 env[1106]: time="2024-04-12T18:24:31.887574870Z" level=info msg="StartContainer for \"ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4\" returns successfully"
Apr 12 18:24:31.895311 systemd[1]: cri-containerd-ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4.scope: Deactivated successfully.
Apr 12 18:24:31.914323 env[1106]: time="2024-04-12T18:24:31.914277975Z" level=info msg="shim disconnected" id=ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4
Apr 12 18:24:31.914563 env[1106]: time="2024-04-12T18:24:31.914541097Z" level=warning msg="cleaning up after shim disconnected" id=ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4 namespace=k8s.io
Apr 12 18:24:31.914711 env[1106]: time="2024-04-12T18:24:31.914615417Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:31.920688 env[1106]: time="2024-04-12T18:24:31.920650330Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3903 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:32.296591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce8293ff278a94695e0792fd21858383ab19673869433564e68482083ce565b4-rootfs.mount: Deactivated successfully.
Apr 12 18:24:32.643237 kubelet[1932]: I0412 18:24:32.643189 1932 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e59d5440-4b45-4d47-a02b-a63b43e4f229" path="/var/lib/kubelet/pods/e59d5440-4b45-4d47-a02b-a63b43e4f229/volumes"
Apr 12 18:24:32.833339 kubelet[1932]: E0412 18:24:32.833310 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:32.835874 env[1106]: time="2024-04-12T18:24:32.835832819Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:24:32.846414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436062689.mount: Deactivated successfully.
Apr 12 18:24:32.851289 env[1106]: time="2024-04-12T18:24:32.851249302Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054\""
Apr 12 18:24:32.852122 env[1106]: time="2024-04-12T18:24:32.852091546Z" level=info msg="StartContainer for \"4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054\""
Apr 12 18:24:32.870882 systemd[1]: Started cri-containerd-4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054.scope.
Apr 12 18:24:32.902798 env[1106]: time="2024-04-12T18:24:32.902704179Z" level=info msg="StartContainer for \"4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054\" returns successfully"
Apr 12 18:24:32.907170 systemd[1]: cri-containerd-4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054.scope: Deactivated successfully.
Apr 12 18:24:32.927765 env[1106]: time="2024-04-12T18:24:32.927716273Z" level=info msg="shim disconnected" id=4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054
Apr 12 18:24:32.927999 env[1106]: time="2024-04-12T18:24:32.927979595Z" level=warning msg="cleaning up after shim disconnected" id=4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054 namespace=k8s.io
Apr 12 18:24:32.928078 env[1106]: time="2024-04-12T18:24:32.928064395Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:32.934776 env[1106]: time="2024-04-12T18:24:32.934736351Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3960 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:33.296624 systemd[1]: run-containerd-runc-k8s.io-4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054-runc.8SHDDP.mount: Deactivated successfully.
Apr 12 18:24:33.296738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4189c35fcf7bed6b64da36057b2f84e4a2b2e215a259d5f191dc9bef20418054-rootfs.mount: Deactivated successfully.
Apr 12 18:24:33.717597 kubelet[1932]: E0412 18:24:33.717564 1932 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:24:33.837187 kubelet[1932]: E0412 18:24:33.837155 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:33.839737 env[1106]: time="2024-04-12T18:24:33.839691003Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:24:33.856338 env[1106]: time="2024-04-12T18:24:33.856279411Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523\""
Apr 12 18:24:33.857111 env[1106]: time="2024-04-12T18:24:33.857068976Z" level=info msg="StartContainer for \"275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523\""
Apr 12 18:24:33.870797 systemd[1]: Started cri-containerd-275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523.scope.
Apr 12 18:24:33.903476 systemd[1]: cri-containerd-275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523.scope: Deactivated successfully.
Apr 12 18:24:33.905376 env[1106]: time="2024-04-12T18:24:33.905338872Z" level=info msg="StartContainer for \"275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523\" returns successfully"
Apr 12 18:24:33.923151 env[1106]: time="2024-04-12T18:24:33.923104166Z" level=info msg="shim disconnected" id=275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523
Apr 12 18:24:33.923151 env[1106]: time="2024-04-12T18:24:33.923147287Z" level=warning msg="cleaning up after shim disconnected" id=275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523 namespace=k8s.io
Apr 12 18:24:33.923151 env[1106]: time="2024-04-12T18:24:33.923156727Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:33.929243 env[1106]: time="2024-04-12T18:24:33.929209239Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4018 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:34.296826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-275eb278c167f21a01328fce3a7fa0f4650b7a23511b1e56ba71ea4a367ee523-rootfs.mount: Deactivated successfully.
Apr 12 18:24:34.841952 kubelet[1932]: E0412 18:24:34.841907 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:34.844572 env[1106]: time="2024-04-12T18:24:34.844527327Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:24:34.857964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002392814.mount: Deactivated successfully.
Apr 12 18:24:34.880223 env[1106]: time="2024-04-12T18:24:34.880148714Z" level=info msg="CreateContainer within sandbox \"6cf215b582d0d895a4318be6fee6ddb30307643d7962000a9f5e0974c1cc0cb8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed\""
Apr 12 18:24:34.881850 env[1106]: time="2024-04-12T18:24:34.881020959Z" level=info msg="StartContainer for \"5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed\""
Apr 12 18:24:34.898839 systemd[1]: Started cri-containerd-5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed.scope.
Apr 12 18:24:34.935020 env[1106]: time="2024-04-12T18:24:34.934964202Z" level=info msg="StartContainer for \"5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed\" returns successfully"
Apr 12 18:24:35.200714 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Apr 12 18:24:35.296794 systemd[1]: run-containerd-runc-k8s.io-5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed-runc.vS30Mp.mount: Deactivated successfully.
Apr 12 18:24:35.846153 kubelet[1932]: E0412 18:24:35.846112 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:37.170144 kubelet[1932]: E0412 18:24:37.170108 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:37.852693 systemd-networkd[1001]: lxc_health: Link UP
Apr 12 18:24:37.860022 systemd-networkd[1001]: lxc_health: Gained carrier
Apr 12 18:24:37.860664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:24:39.170310 kubelet[1932]: E0412 18:24:39.170261 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:39.185073 kubelet[1932]: I0412 18:24:39.185019 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vv8x2" podStartSLOduration=9.184982805 podStartE2EDuration="9.184982805s" podCreationTimestamp="2024-04-12 18:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:24:35.860524966 +0000 UTC m=+87.320695181" watchObservedRunningTime="2024-04-12 18:24:39.184982805 +0000 UTC m=+90.645152980"
Apr 12 18:24:39.371764 systemd-networkd[1001]: lxc_health: Gained IPv6LL
Apr 12 18:24:39.704851 systemd[1]: run-containerd-runc-k8s.io-5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed-runc.DZU47y.mount: Deactivated successfully.
Apr 12 18:24:39.852016 kubelet[1932]: E0412 18:24:39.851963 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:43.640847 kubelet[1932]: E0412 18:24:43.640807 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:43.641444 kubelet[1932]: E0412 18:24:43.641423 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:24:43.955695 systemd[1]: run-containerd-runc-k8s.io-5820128dcdce9a4736d5aa1bc64a20f401e5253ee20526faee04b24e38d463ed-runc.MqPbgW.mount: Deactivated successfully.
Apr 12 18:24:44.007084 sshd[3726]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:44.009358 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:34154.service: Deactivated successfully.
Apr 12 18:24:44.010138 systemd[1]: session-25.scope: Deactivated successfully.
Apr 12 18:24:44.010671 systemd-logind[1091]: Session 25 logged out. Waiting for processes to exit.
Apr 12 18:24:44.011372 systemd-logind[1091]: Removed session 25.