Feb 12 19:16:44.733350 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:16:44.733370 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:16:44.733377 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:16:44.733383 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 12 19:16:44.733388 kernel: random: crng init done
Feb 12 19:16:44.733393 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:16:44.733400 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:16:44.733406 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:16:44.733412 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733417 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733423 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733428 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733434 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733439 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733447 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733453 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733459 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:16:44.733464 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:16:44.733470 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:16:44.733476 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:16:44.733482 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 12 19:16:44.733488 kernel: Zone ranges:
Feb 12 19:16:44.733493 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:16:44.733500 kernel: DMA32 empty
Feb 12 19:16:44.733506 kernel: Normal empty
Feb 12 19:16:44.733511 kernel: Movable zone start for each node
Feb 12 19:16:44.733517 kernel: Early memory node ranges
Feb 12 19:16:44.733523 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:16:44.733537 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:16:44.733543 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:16:44.733549 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:16:44.733554 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:16:44.733560 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:16:44.733566 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:16:44.733572 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:16:44.733579 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:16:44.733585 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:16:44.733590 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:16:44.733596 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:16:44.733602 kernel: psci: Trusted OS migration not required
Feb 12 19:16:44.733610 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:16:44.733617 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:16:44.733624 kernel: ACPI: SRAT not present
Feb 12 19:16:44.733630 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:16:44.733636 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:16:44.733642 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 12 19:16:44.733648 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:16:44.733655 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:16:44.733661 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:16:44.733667 kernel: CPU features: detected: Spectre-v4
Feb 12 19:16:44.733673 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:16:44.733680 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:16:44.733686 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:16:44.733692 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:16:44.733698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 12 19:16:44.733704 kernel: Policy zone: DMA
Feb 12 19:16:44.733711 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:16:44.733718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:16:44.733724 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:16:44.733730 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:16:44.733737 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:16:44.733743 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 12 19:16:44.733750 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:16:44.733756 kernel: trace event string verifier disabled
Feb 12 19:16:44.733762 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:16:44.733769 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:16:44.733775 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:16:44.733782 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:16:44.733788 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:16:44.733794 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:16:44.733800 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:16:44.733806 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:16:44.733812 kernel: GICv3: 256 SPIs implemented
Feb 12 19:16:44.733829 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:16:44.733835 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:16:44.733841 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:16:44.733847 kernel: GICv3: 16 PPIs implemented
Feb 12 19:16:44.733853 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:16:44.733859 kernel: ACPI: SRAT not present
Feb 12 19:16:44.733865 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:16:44.733871 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:16:44.733878 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:16:44.733884 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:16:44.733890 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:16:44.733896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:44.733904 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:16:44.733910 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:16:44.733917 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:16:44.733923 kernel: arm-pv: using stolen time PV
Feb 12 19:16:44.733930 kernel: Console: colour dummy device 80x25
Feb 12 19:16:44.733936 kernel: ACPI: Core revision 20210730
Feb 12 19:16:44.733942 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:16:44.733949 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:16:44.733955 kernel: LSM: Security Framework initializing
Feb 12 19:16:44.733961 kernel: SELinux: Initializing.
Feb 12 19:16:44.733969 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:16:44.733975 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:16:44.733981 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:16:44.733987 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:16:44.733993 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:16:44.734000 kernel: Remapping and enabling EFI services.
Feb 12 19:16:44.734006 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:16:44.734012 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:16:44.734018 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:16:44.734026 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:16:44.734032 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:44.734038 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:16:44.734044 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:16:44.734051 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:16:44.734057 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:16:44.734063 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:44.734070 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:16:44.734076 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:16:44.734083 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:16:44.734090 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:16:44.734096 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:16:44.734103 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:16:44.734109 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:16:44.734156 kernel: SMP: Total of 4 processors activated.
Feb 12 19:16:44.734164 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:16:44.734172 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:16:44.734178 kernel: CPU features: detected: Common not Private translations
Feb 12 19:16:44.734185 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:16:44.734192 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:16:44.734199 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:16:44.734205 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:16:44.734214 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:16:44.734220 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:16:44.734227 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:16:44.734234 kernel: alternatives: patching kernel code
Feb 12 19:16:44.734241 kernel: devtmpfs: initialized
Feb 12 19:16:44.734248 kernel: KASLR enabled
Feb 12 19:16:44.734255 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:16:44.734262 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:16:44.734268 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:16:44.734275 kernel: SMBIOS 3.0.0 present.
Feb 12 19:16:44.734282 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:16:44.734288 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:16:44.734295 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:16:44.734302 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:16:44.734310 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:16:44.734317 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:16:44.734323 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1
Feb 12 19:16:44.734330 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:16:44.734336 kernel: cpuidle: using governor menu
Feb 12 19:16:44.734343 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:16:44.734350 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:16:44.734356 kernel: ACPI: bus type PCI registered
Feb 12 19:16:44.734363 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:16:44.734371 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:16:44.734377 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:16:44.734384 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:16:44.734391 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:16:44.734397 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:16:44.734404 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:16:44.734411 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:16:44.734417 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:16:44.734424 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:16:44.734431 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:16:44.734438 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:16:44.734445 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:16:44.735118 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:16:44.735138 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:16:44.735145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:16:44.735152 kernel: ACPI: Interpreter enabled
Feb 12 19:16:44.735159 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:16:44.735166 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:16:44.735178 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:16:44.735185 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:16:44.735192 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:16:44.735326 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:16:44.735389 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:16:44.735446 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:16:44.735503 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:16:44.735581 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:16:44.735592 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 12 19:16:44.735599 kernel: PCI host bridge to bus 0000:00
Feb 12 19:16:44.735671 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:16:44.735727 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 19:16:44.735781 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:16:44.735852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:16:44.735932 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:16:44.736004 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:16:44.736072 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 12 19:16:44.736132 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:16:44.736193 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:16:44.736254 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:16:44.736316 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:16:44.736379 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 12 19:16:44.736432 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:16:44.736484 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 19:16:44.736544 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:16:44.736554 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:16:44.736561 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:16:44.736568 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:16:44.736576 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:16:44.736583 kernel: iommu: Default domain type: Translated
Feb 12 19:16:44.736589 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:16:44.736596 kernel: vgaarb: loaded
Feb 12 19:16:44.736603 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:16:44.736610 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:16:44.736617 kernel: PTP clock support registered
Feb 12 19:16:44.736623 kernel: Registered efivars operations
Feb 12 19:16:44.736630 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:16:44.736638 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:16:44.736644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:16:44.736651 kernel: pnp: PnP ACPI init
Feb 12 19:16:44.736717 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:16:44.736726 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:16:44.736733 kernel: NET: Registered PF_INET protocol family
Feb 12 19:16:44.736740 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:16:44.736747 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:16:44.736753 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:16:44.736762 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:16:44.736769 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:16:44.736775 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:16:44.736782 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:16:44.736789 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:16:44.736796 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:16:44.736803 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:16:44.736810 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:16:44.736827 kernel: kvm [1]: HYP mode not available
Feb 12 19:16:44.736834 kernel: Initialise system trusted keyrings
Feb 12 19:16:44.736843 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:16:44.736850 kernel: Key type asymmetric registered
Feb 12 19:16:44.736857 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:16:44.736863 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:16:44.736870 kernel: io scheduler mq-deadline registered
Feb 12 19:16:44.736877 kernel: io scheduler kyber registered
Feb 12 19:16:44.736883 kernel: io scheduler bfq registered
Feb 12 19:16:44.736890 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:16:44.736899 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:16:44.736906 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:16:44.736978 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:16:44.736988 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:16:44.736994 kernel: thunder_xcv, ver 1.0
Feb 12 19:16:44.737001 kernel: thunder_bgx, ver 1.0
Feb 12 19:16:44.737008 kernel: nicpf, ver 1.0
Feb 12 19:16:44.737014 kernel: nicvf, ver 1.0
Feb 12 19:16:44.737089 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:16:44.737150 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:16:44 UTC (1707765404)
Feb 12 19:16:44.737159 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:16:44.737166 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:16:44.737173 kernel: Segment Routing with IPv6
Feb 12 19:16:44.737180 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:16:44.737187 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:16:44.737194 kernel: Key type dns_resolver registered
Feb 12 19:16:44.737201 kernel: registered taskstats version 1
Feb 12 19:16:44.737209 kernel: Loading compiled-in X.509 certificates
Feb 12 19:16:44.737217 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:16:44.737223 kernel: Key type .fscrypt registered
Feb 12 19:16:44.737230 kernel: Key type fscrypt-provisioning registered
Feb 12 19:16:44.737237 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:16:44.737243 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:16:44.737250 kernel: ima: No architecture policies found
Feb 12 19:16:44.737257 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:16:44.737265 kernel: Run /init as init process
Feb 12 19:16:44.737272 kernel: with arguments:
Feb 12 19:16:44.737278 kernel: /init
Feb 12 19:16:44.737285 kernel: with environment:
Feb 12 19:16:44.737291 kernel: HOME=/
Feb 12 19:16:44.737298 kernel: TERM=linux
Feb 12 19:16:44.737304 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:16:44.737313 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:16:44.737322 systemd[1]: Detected virtualization kvm.
Feb 12 19:16:44.737332 systemd[1]: Detected architecture arm64.
Feb 12 19:16:44.737339 systemd[1]: Running in initrd.
Feb 12 19:16:44.737346 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:16:44.737354 systemd[1]: Hostname set to .
Feb 12 19:16:44.737361 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:16:44.737369 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:16:44.737376 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:16:44.737383 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:16:44.737392 systemd[1]: Reached target paths.target.
Feb 12 19:16:44.737399 systemd[1]: Reached target slices.target.
Feb 12 19:16:44.737407 systemd[1]: Reached target swap.target.
Feb 12 19:16:44.737414 systemd[1]: Reached target timers.target.
Feb 12 19:16:44.737422 systemd[1]: Listening on iscsid.socket.
Feb 12 19:16:44.737430 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:16:44.737437 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:16:44.737446 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:16:44.737453 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:16:44.737461 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:16:44.737468 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:16:44.737475 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:16:44.737483 systemd[1]: Reached target sockets.target.
Feb 12 19:16:44.737490 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:16:44.737497 systemd[1]: Finished network-cleanup.service.
Feb 12 19:16:44.737505 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:16:44.737514 systemd[1]: Starting systemd-journald.service...
Feb 12 19:16:44.737521 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:16:44.737537 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:16:44.737546 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:16:44.737553 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:16:44.737561 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:16:44.737568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:16:44.737576 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:16:44.737583 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:16:44.737593 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:16:44.737600 kernel: audit: type=1130 audit(1707765404.735:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.737611 systemd-journald[289]: Journal started
Feb 12 19:16:44.737655 systemd-journald[289]: Runtime Journal (/run/log/journal/a5e2807acca749d192d00ccb20252880) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:16:44.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.725065 systemd-modules-load[290]: Inserted module 'overlay'
Feb 12 19:16:44.739676 systemd[1]: Started systemd-journald.service.
Feb 12 19:16:44.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.742844 kernel: audit: type=1130 audit(1707765404.740:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.742883 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:16:44.745712 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 12 19:16:44.746613 kernel: Bridge firewalling registered
Feb 12 19:16:44.748864 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:16:44.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.755574 kernel: audit: type=1130 audit(1707765404.748:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.753019 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:16:44.756848 kernel: SCSI subsystem initialized
Feb 12 19:16:44.757615 systemd-resolved[291]: Positive Trust Anchors:
Feb 12 19:16:44.757627 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:16:44.757658 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:16:44.762666 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 12 19:16:44.769452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:16:44.769475 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:16:44.769486 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:16:44.769501 dracut-cmdline[307]: dracut-dracut-053
Feb 12 19:16:44.769501 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:16:44.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.763534 systemd[1]: Started systemd-resolved.service.
Feb 12 19:16:44.779887 kernel: audit: type=1130 audit(1707765404.767:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.779911 kernel: audit: type=1130 audit(1707765404.776:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.768799 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:16:44.770285 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 12 19:16:44.771018 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:16:44.777613 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:16:44.786029 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:16:44.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.789850 kernel: audit: type=1130 audit(1707765404.786:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.837840 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:16:44.845840 kernel: iscsi: registered transport (tcp)
Feb 12 19:16:44.860839 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:16:44.860853 kernel: QLogic iSCSI HBA Driver
Feb 12 19:16:44.894563 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:16:44.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.896055 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:16:44.898348 kernel: audit: type=1130 audit(1707765404.894:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:44.940844 kernel: raid6: neonx8 gen() 13790 MB/s
Feb 12 19:16:44.957832 kernel: raid6: neonx8 xor() 10812 MB/s
Feb 12 19:16:44.974832 kernel: raid6: neonx4 gen() 13533 MB/s
Feb 12 19:16:44.991830 kernel: raid6: neonx4 xor() 11238 MB/s
Feb 12 19:16:45.008828 kernel: raid6: neonx2 gen() 12919 MB/s
Feb 12 19:16:45.025828 kernel: raid6: neonx2 xor() 10237 MB/s
Feb 12 19:16:45.042828 kernel: raid6: neonx1 gen() 10492 MB/s
Feb 12 19:16:45.059833 kernel: raid6: neonx1 xor() 8775 MB/s
Feb 12 19:16:45.076830 kernel: raid6: int64x8 gen() 6294 MB/s
Feb 12 19:16:45.093829 kernel: raid6: int64x8 xor() 3546 MB/s
Feb 12 19:16:45.110829 kernel: raid6: int64x4 gen() 7214 MB/s
Feb 12 19:16:45.127829 kernel: raid6: int64x4 xor() 3852 MB/s
Feb 12 19:16:45.144829 kernel: raid6: int64x2 gen() 6146 MB/s
Feb 12 19:16:45.161829 kernel: raid6: int64x2 xor() 3320 MB/s
Feb 12 19:16:45.178830 kernel: raid6: int64x1 gen() 5039 MB/s
Feb 12 19:16:45.196052 kernel: raid6: int64x1 xor() 2644 MB/s
Feb 12 19:16:45.196071 kernel: raid6: using algorithm neonx8 gen() 13790 MB/s
Feb 12 19:16:45.196079 kernel: raid6: .... xor() 10812 MB/s, rmw enabled
Feb 12 19:16:45.196089 kernel: raid6: using neon recovery algorithm
Feb 12 19:16:45.207090 kernel: xor: measuring software checksum speed
Feb 12 19:16:45.207107 kernel: 8regs : 17297 MB/sec
Feb 12 19:16:45.207921 kernel: 32regs : 20749 MB/sec
Feb 12 19:16:45.209083 kernel: arm64_neon : 27968 MB/sec
Feb 12 19:16:45.209094 kernel: xor: using function: arm64_neon (27968 MB/sec)
Feb 12 19:16:45.262863 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:16:45.273384 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:16:45.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:45.276000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:16:45.276000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:16:45.277839 kernel: audit: type=1130 audit(1707765405.273:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:45.277863 kernel: audit: type=1334 audit(1707765405.276:10): prog-id=7 op=LOAD
Feb 12 19:16:45.278114 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:16:45.292243 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Feb 12 19:16:45.295642 systemd[1]: Started systemd-udevd.service.
Feb 12 19:16:45.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:45.299512 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:16:45.310828 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
Feb 12 19:16:45.338755 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:16:45.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:45.340342 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:16:45.374893 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:16:45.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:45.403842 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:16:45.415905 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:16:45.415951 kernel: GPT:9289727 != 19775487
Feb 12 19:16:45.415961 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:16:45.416843 kernel: GPT:9289727 != 19775487
Feb 12 19:16:45.416875 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:16:45.416884 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:16:45.434914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:16:45.437656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:16:45.438667 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:16:45.443840 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (534)
Feb 12 19:16:45.444750 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:16:45.450280 systemd[1]: Starting disk-uuid.service...
Feb 12 19:16:45.453335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:16:45.457661 disk-uuid[560]: Primary Header is updated.
Feb 12 19:16:45.457661 disk-uuid[560]: Secondary Entries is updated.
Feb 12 19:16:45.457661 disk-uuid[560]: Secondary Header is updated.
Feb 12 19:16:45.460368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:16:46.476410 disk-uuid[561]: The operation has completed successfully.
Feb 12 19:16:46.477426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:16:46.506345 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:16:46.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.506435 systemd[1]: Finished disk-uuid.service.
Feb 12 19:16:46.507982 systemd[1]: Starting verity-setup.service...
Feb 12 19:16:46.525412 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:16:46.548266 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:16:46.550414 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:16:46.552169 systemd[1]: Finished verity-setup.service.
Feb 12 19:16:46.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.598837 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:16:46.598871 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:16:46.599663 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:16:46.600410 systemd[1]: Starting ignition-setup.service...
Feb 12 19:16:46.602405 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:16:46.610841 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:16:46.610890 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:16:46.610900 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:16:46.621376 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:16:46.628966 systemd[1]: Finished ignition-setup.service.
Feb 12 19:16:46.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.630534 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:16:46.685306 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:16:46.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.686000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:16:46.687367 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:16:46.709290 systemd-networkd[738]: lo: Link UP
Feb 12 19:16:46.709303 systemd-networkd[738]: lo: Gained carrier
Feb 12 19:16:46.709672 systemd-networkd[738]: Enumeration completed
Feb 12 19:16:46.709795 systemd[1]: Started systemd-networkd.service.
Feb 12 19:16:46.709895 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:16:46.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.710912 systemd[1]: Reached target network.target.
Feb 12 19:16:46.712466 systemd[1]: Starting iscsiuio.service...
Feb 12 19:16:46.714215 systemd-networkd[738]: eth0: Link UP
Feb 12 19:16:46.714219 systemd-networkd[738]: eth0: Gained carrier
Feb 12 19:16:46.727209 ignition[653]: Ignition 2.14.0
Feb 12 19:16:46.727220 ignition[653]: Stage: fetch-offline
Feb 12 19:16:46.727263 ignition[653]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:46.727271 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:46.729789 systemd[1]: Started iscsiuio.service.
Feb 12 19:16:46.727447 ignition[653]: parsed url from cmdline: ""
Feb 12 19:16:46.731566 systemd[1]: Starting iscsid.service...
Feb 12 19:16:46.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.727454 ignition[653]: no config URL provided
Feb 12 19:16:46.732289 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:16:46.727459 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:16:46.738960 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:16:46.738960 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 19:16:46.738960 iscsid[744]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 19:16:46.738960 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:16:46.738960 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:16:46.738960 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:16:46.738960 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:16:46.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.727472 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:16:46.739127 systemd[1]: Started iscsid.service.
Feb 12 19:16:46.727492 ignition[653]: op(1): [started] loading QEMU firmware config module
Feb 12 19:16:46.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.740934 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:16:46.727497 ignition[653]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 19:16:46.752046 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:16:46.734375 ignition[653]: op(1): [finished] loading QEMU firmware config module
Feb 12 19:16:46.753153 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:16:46.754576 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:16:46.755956 systemd[1]: Reached target remote-fs.target.
Feb 12 19:16:46.758086 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:16:46.767420 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:16:46.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.817669 ignition[653]: parsing config with SHA512: d0065079e13f7ff19f2574f93e4bd9ac177ba6942b286813c65b3cceac66b37f910e53f56fb409c7831962ee5a758efa127246838bdf7bce5045e0a5e1d7be85
Feb 12 19:16:46.860921 unknown[653]: fetched base config from "system"
Feb 12 19:16:46.860938 unknown[653]: fetched user config from "qemu"
Feb 12 19:16:46.861785 ignition[653]: fetch-offline: fetch-offline passed
Feb 12 19:16:46.861885 ignition[653]: Ignition finished successfully
Feb 12 19:16:46.863842 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:16:46.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.864698 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 19:16:46.865576 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:16:46.876483 ignition[759]: Ignition 2.14.0
Feb 12 19:16:46.876494 ignition[759]: Stage: kargs
Feb 12 19:16:46.876612 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:46.876622 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:46.879236 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:16:46.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.877735 ignition[759]: kargs: kargs passed
Feb 12 19:16:46.877782 ignition[759]: Ignition finished successfully
Feb 12 19:16:46.881347 systemd[1]: Starting ignition-disks.service...
Feb 12 19:16:46.891134 ignition[765]: Ignition 2.14.0
Feb 12 19:16:46.891145 ignition[765]: Stage: disks
Feb 12 19:16:46.891266 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:46.891280 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:46.893721 ignition[765]: disks: disks passed
Feb 12 19:16:46.893786 ignition[765]: Ignition finished successfully
Feb 12 19:16:46.895711 systemd[1]: Finished ignition-disks.service.
Feb 12 19:16:46.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.896591 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:16:46.897530 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:16:46.898652 systemd[1]: Reached target local-fs.target.
Feb 12 19:16:46.899776 systemd[1]: Reached target sysinit.target.
Feb 12 19:16:46.900767 systemd[1]: Reached target basic.target.
Feb 12 19:16:46.902993 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:16:46.921307 systemd-fsck[773]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 19:16:46.925310 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:16:46.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:46.926965 systemd[1]: Mounting sysroot.mount...
Feb 12 19:16:46.938843 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:16:46.938879 systemd[1]: Mounted sysroot.mount.
Feb 12 19:16:46.939623 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:16:46.941575 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:16:46.942454 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 19:16:46.942494 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:16:46.942528 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:16:46.944593 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:16:46.946797 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:16:46.951718 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:16:46.957315 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:16:46.962201 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:16:46.967220 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:16:47.000868 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:16:47.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:47.003145 systemd[1]: Starting ignition-mount.service...
Feb 12 19:16:47.005272 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:16:47.013728 bash[824]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:16:47.029653 ignition[826]: INFO : Ignition 2.14.0
Feb 12 19:16:47.029653 ignition[826]: INFO : Stage: mount
Feb 12 19:16:47.029653 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:47.029653 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:47.029653 ignition[826]: INFO : mount: mount passed
Feb 12 19:16:47.029653 ignition[826]: INFO : Ignition finished successfully
Feb 12 19:16:47.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:47.030478 systemd[1]: Finished ignition-mount.service.
Feb 12 19:16:47.042444 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:16:47.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:47.559023 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:16:47.568854 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834)
Feb 12 19:16:47.570857 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:16:47.570880 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:16:47.570890 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:16:47.577637 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:16:47.579590 systemd[1]: Starting ignition-files.service...
Feb 12 19:16:47.596490 ignition[854]: INFO : Ignition 2.14.0
Feb 12 19:16:47.596490 ignition[854]: INFO : Stage: files
Feb 12 19:16:47.598042 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:47.598042 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:47.598042 ignition[854]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:16:47.605624 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:16:47.605624 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:16:47.612994 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:16:47.614182 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:16:47.615605 unknown[854]: wrote ssh authorized keys file for user: core
Feb 12 19:16:47.616747 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:16:47.616747 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:16:47.616747 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 12 19:16:47.657841 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:16:47.730605 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:16:47.730605 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 12 19:16:47.734278 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 12 19:16:48.040747 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:16:48.176253 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 12 19:16:48.178593 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 12 19:16:48.178593 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 12 19:16:48.178593 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 12 19:16:48.413377 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:16:48.656097 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 12 19:16:48.656097 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 12 19:16:48.661256 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:16:48.661256 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:16:48.661256 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:16:48.661256 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1
Feb 12 19:16:48.704494 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:16:48.735085 systemd-networkd[738]: eth0: Gained IPv6LL
Feb 12 19:16:49.130944 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432
Feb 12 19:16:49.133313 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:16:49.133313 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:16:49.133313 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:16:49.155082 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:16:50.051090 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6
Feb 12 19:16:50.053700 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:16:50.053700 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:16:50.053700 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1
Feb 12 19:16:50.074236 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:16:50.452857 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3
Feb 12 19:16:50.452857 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:16:50.456109 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:16:50.456109 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 12 19:16:50.691274 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 19:16:50.753923 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:16:50.753923 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:16:50.756299 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(10): [started] processing unit "prepare-critools.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(10): [finished] processing unit "prepare-critools.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:16:50.773294 ignition[854]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:16:50.800262 ignition[854]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:16:50.800262 ignition[854]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:16:50.800262 ignition[854]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:16:50.800262 ignition[854]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:16:50.800262 ignition[854]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:16:50.800262 ignition[854]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:16:50.816195 ignition[854]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:16:50.817339 ignition[854]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:16:50.817339 ignition[854]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:16:50.817339 ignition[854]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:16:50.817339 ignition[854]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:16:50.817339 ignition[854]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:16:50.817339 ignition[854]: INFO : files: files passed
Feb 12 19:16:50.817339 ignition[854]: INFO : Ignition finished successfully
Feb 12 19:16:50.830468 kernel: kauditd_printk_skb: 22 callbacks suppressed
Feb 12 19:16:50.830490 kernel: audit: type=1130 audit(1707765410.821:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.819940 systemd[1]: Finished ignition-files.service.
Feb 12 19:16:50.825297 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:16:50.826624 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:16:50.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.836523 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:16:50.842385 kernel: audit: type=1130 audit(1707765410.832:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.842408 kernel: audit: type=1131 audit(1707765410.833:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.842419 kernel: audit: type=1130 audit(1707765410.839:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.827328 systemd[1]: Starting ignition-quench.service...
Feb 12 19:16:50.843598 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:16:50.831395 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:16:50.831488 systemd[1]: Finished ignition-quench.service.
Feb 12 19:16:50.835197 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:16:50.839764 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:16:50.843885 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:16:50.857556 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:16:50.857656 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:16:50.862954 kernel: audit: type=1130 audit(1707765410.858:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.862977 kernel: audit: type=1131 audit(1707765410.858:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.859159 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:16:50.863670 systemd[1]: Reached target initrd.target.
Feb 12 19:16:50.864781 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:16:50.865623 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:16:50.876862 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:16:50.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.878499 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:16:50.880992 kernel: audit: type=1130 audit(1707765410.876:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.887126 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:16:50.888164 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:16:50.889388 systemd[1]: Stopped target timers.target.
Feb 12 19:16:50.890534 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:16:50.893834 kernel: audit: type=1131 audit(1707765410.890:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.890649 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:16:50.891712 systemd[1]: Stopped target initrd.target.
Feb 12 19:16:50.894571 systemd[1]: Stopped target basic.target.
Feb 12 19:16:50.895624 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:16:50.896553 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:16:50.898262 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:16:50.899868 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:16:50.901039 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:16:50.902186 systemd[1]: Stopped target sysinit.target.
Feb 12 19:16:50.903315 systemd[1]: Stopped target local-fs.target.
Feb 12 19:16:50.904336 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:16:50.905493 systemd[1]: Stopped target swap.target.
Feb 12 19:16:50.906558 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:16:50.909945 kernel: audit: type=1131 audit(1707765410.906:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.906679 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:16:50.907585 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:16:50.913856 kernel: audit: type=1131 audit(1707765410.910:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.910622 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:16:50.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.910729 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:16:50.911805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:16:50.911920 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:16:50.914796 systemd[1]: Stopped target paths.target.
Feb 12 19:16:50.915700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:16:50.916864 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:16:50.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.917755 systemd[1]: Stopped target slices.target.
Feb 12 19:16:50.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.918935 systemd[1]: Stopped target sockets.target.
Feb 12 19:16:50.920018 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:16:50.920133 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:16:50.925826 iscsid[744]: iscsid shutting down.
Feb 12 19:16:50.921623 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:16:50.921716 systemd[1]: Stopped ignition-files.service.
Feb 12 19:16:50.923674 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:16:50.925949 systemd[1]: Stopping iscsid.service...
Feb 12 19:16:50.927868 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:16:50.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.932475 ignition[895]: INFO : Ignition 2.14.0
Feb 12 19:16:50.932475 ignition[895]: INFO : Stage: umount
Feb 12 19:16:50.932475 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:16:50.932475 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:16:50.932475 ignition[895]: INFO : umount: umount passed
Feb 12 19:16:50.932475 ignition[895]: INFO : Ignition finished successfully
Feb 12 19:16:50.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.928877 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:16:50.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.929009 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:16:50.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.930419 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:16:50.930517 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:16:50.932770 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:16:50.932888 systemd[1]: Stopped iscsid.service.
Feb 12 19:16:50.934357 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:16:50.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.934439 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:16:50.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.936251 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:16:50.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.936332 systemd[1]: Closed iscsid.socket.
Feb 12 19:16:50.937685 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:16:50.937738 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:16:50.938958 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:16:50.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.938998 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:16:50.940472 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:16:50.940525 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:16:50.942133 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:16:50.944493 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:16:50.944944 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:16:50.945028 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:16:50.946472 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:16:50.946699 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:16:50.948263 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:16:50.948339 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:16:50.949844 systemd[1]: Stopped target network.target.
Feb 12 19:16:50.950575 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:16:50.950608 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:16:50.951565 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:16:50.951602 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:16:50.952645 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:16:50.953848 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:16:50.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.962857 systemd-networkd[738]: eth0: DHCPv6 lease lost
Feb 12 19:16:50.963605 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:16:50.963702 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:16:50.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.967000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:16:50.965324 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:16:50.965409 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:16:50.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.966311 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:16:50.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.966336 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:16:50.971000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:16:50.967873 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:16:50.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.968862 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:16:50.968916 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:16:50.970196 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:16:50.970240 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:16:50.971761 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:16:50.971806 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:16:50.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.973272 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:16:50.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.977967 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:16:50.980436 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:16:50.980538 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:16:50.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.981993 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:16:50.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.982105 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:16:50.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.983240 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:16:50.983273 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:16:50.984166 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:16:50.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.984201 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:16:50.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.985607 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:16:50.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.985656 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:16:50.986944 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:16:50.986989 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:16:50.988202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:16:50.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:50.988244 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:16:50.990326 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:16:50.991294 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 19:16:50.991348 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 19:16:50.993422 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:16:50.993461 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:16:50.994248 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:16:50.994284 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:16:50.996280 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 12 19:16:50.996707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:16:50.996786 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:16:50.998364 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:16:51.000281 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:16:51.007147 systemd[1]: Switching root.
Feb 12 19:16:51.021123 systemd-journald[289]: Journal stopped
Feb 12 19:16:53.227161 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:16:53.227231 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:16:53.227245 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:16:53.227254 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:16:53.227268 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:16:53.227277 kernel: SELinux: policy capability open_perms=1
Feb 12 19:16:53.227287 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:16:53.227296 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:16:53.227305 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:16:53.227315 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:16:53.227327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:16:53.227337 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:16:53.227372 systemd[1]: Successfully loaded SELinux policy in 34.685ms.
Feb 12 19:16:53.227393 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.982ms.
Feb 12 19:16:53.227405 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:16:53.227417 systemd[1]: Detected virtualization kvm.
Feb 12 19:16:53.227428 systemd[1]: Detected architecture arm64.
Feb 12 19:16:53.227439 systemd[1]: Detected first boot.
Feb 12 19:16:53.227451 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:16:53.227462 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:16:53.227472 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:16:53.227483 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:16:53.227494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:16:53.227514 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:16:53.227526 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 19:16:53.227537 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 19:16:53.227548 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:16:53.227558 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:16:53.227568 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:16:53.227579 systemd[1]: Created slice system-getty.slice.
Feb 12 19:16:53.227591 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:16:53.227601 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:16:53.227612 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:16:53.227625 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:16:53.227636 systemd[1]: Created slice user.slice.
Feb 12 19:16:53.227646 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:16:53.227656 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:16:53.227666 systemd[1]: Set up automount boot.automount.
Feb 12 19:16:53.227677 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:16:53.227687 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:16:53.227697 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:16:53.227709 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:16:53.227720 systemd[1]: Reached target integritysetup.target.
Feb 12 19:16:53.227731 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:16:53.227742 systemd[1]: Reached target remote-fs.target.
Feb 12 19:16:53.227752 systemd[1]: Reached target slices.target.
Feb 12 19:16:53.227763 systemd[1]: Reached target swap.target.
Feb 12 19:16:53.227773 systemd[1]: Reached target torcx.target.
Feb 12 19:16:53.227783 systemd[1]: Reached target veritysetup.target.
Feb 12 19:16:53.227794 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:16:53.227806 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:16:53.227823 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:16:53.227835 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:16:53.227846 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:16:53.227857 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:16:53.227867 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:16:53.227878 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:16:53.227889 systemd[1]: Mounting media.mount...
Feb 12 19:16:53.227899 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:16:53.227909 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:16:53.227921 systemd[1]: Mounting tmp.mount...
Feb 12 19:16:53.227932 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:16:53.227942 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:16:53.227953 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:16:53.227964 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:16:53.227974 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:16:53.227984 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:16:53.227995 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:16:53.228005 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:16:53.228017 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:16:53.228032 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:16:53.228043 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 19:16:53.228054 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 19:16:53.228065 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 19:16:53.228096 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 19:16:53.228107 systemd[1]: Stopped systemd-journald.service.
Feb 12 19:16:53.228117 systemd[1]: Starting systemd-journald.service...
Feb 12 19:16:53.228128 kernel: fuse: init (API version 7.34)
Feb 12 19:16:53.228139 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:16:53.228154 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:16:53.228164 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:16:53.228174 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:16:53.228185 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 19:16:53.228195 systemd[1]: Stopped verity-setup.service.
Feb 12 19:16:53.228204 kernel: loop: module loaded
Feb 12 19:16:53.228216 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:16:53.228228 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:16:53.228242 systemd[1]: Mounted media.mount.
Feb 12 19:16:53.228260 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:16:53.228273 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:16:53.228285 systemd[1]: Mounted tmp.mount.
Feb 12 19:16:53.228296 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:16:53.228310 systemd-journald[986]: Journal started
Feb 12 19:16:53.228353 systemd-journald[986]: Runtime Journal (/run/log/journal/a5e2807acca749d192d00ccb20252880) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:16:51.102000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:16:51.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:16:51.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:16:51.281000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:16:51.281000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:16:51.281000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:16:51.281000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:16:51.331000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:16:51.331000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400018d8dc a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:16:51.331000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:16:51.332000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 19:16:51.332000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400018d9b5 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:16:51.332000 audit: CWD cwd="/"
Feb 12 19:16:51.332000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:16:51.332000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:16:51.332000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:16:53.228876 systemd[1]: Started systemd-journald.service.
Feb 12 19:16:53.089000 audit: BPF prog-id=12 op=LOAD
Feb 12 19:16:53.089000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:16:53.089000 audit: BPF prog-id=13 op=LOAD
Feb 12 19:16:53.089000 audit: BPF prog-id=14 op=LOAD
Feb 12 19:16:53.089000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:16:53.089000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:16:53.090000 audit: BPF prog-id=15 op=LOAD
Feb 12 19:16:53.090000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 19:16:53.090000 audit: BPF prog-id=16 op=LOAD
Feb 12 19:16:53.090000 audit: BPF prog-id=17 op=LOAD
Feb 12 19:16:53.090000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 19:16:53.090000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 19:16:53.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.101000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 19:16:53.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.191000 audit: BPF prog-id=18 op=LOAD
Feb 12 19:16:53.193000 audit: BPF prog-id=19 op=LOAD
Feb 12 19:16:53.193000 audit: BPF prog-id=20 op=LOAD
Feb 12 19:16:53.193000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 19:16:53.193000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 19:16:53.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.225000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:16:53.225000 audit[986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff6b4b610 a2=4000 a3=1 items=0 ppid=1 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:16:53.225000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:16:53.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:51.329876 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:16:53.089090 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:16:51.330231 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:16:53.089103 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:16:51.330251 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:16:53.092209 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 19:16:51.330284 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 19:16:51.330294 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 19:16:51.330325 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 19:16:53.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:51.330337 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 19:16:51.330593 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 19:16:51.330659 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:16:51.330679 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:16:51.331362 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 19:16:51.331396 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 19:16:53.230512 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:16:51.331414 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 19:16:51.331428 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 19:16:51.331445 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 19:16:51.331460 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 19:16:52.808521 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:16:53.230988 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:16:52.808797 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:16:52.808913 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:16:52.809074 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:16:52.809125 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 19:16:52.809182 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2024-02-12T19:16:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 19:16:53.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.232373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:16:53.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.232790 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:16:53.233919 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:16:53.234044 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:16:53.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.235110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:16:53.235311 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:16:53.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.236575 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 19:16:53.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.236725 systemd[1]: Finished modprobe@fuse.service.
Feb 12 19:16:53.237767 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:16:53.237904 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:16:53.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.239117 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:16:53.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.241041 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:16:53.242117 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:16:53.243404 systemd[1]: Reached target network-pre.target.
Feb 12 19:16:53.245569 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:16:53.247535 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:16:53.248337 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:16:53.250490 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:16:53.252700 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:16:53.253682 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:16:53.255000 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:16:53.255859 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:16:53.257088 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:16:53.259339 systemd-journald[986]: Time spent on flushing to /var/log/journal/a5e2807acca749d192d00ccb20252880 is 18.216ms for 1029 entries.
Feb 12 19:16:53.259339 systemd-journald[986]: System Journal (/var/log/journal/a5e2807acca749d192d00ccb20252880) is 8.0M, max 195.6M, 187.6M free.
Feb 12 19:16:53.322881 systemd-journald[986]: Received client request to flush runtime journal.
Feb 12 19:16:53.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.260897 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:16:53.262036 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:16:53.324387 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 19:16:53.263179 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:16:53.265394 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:16:53.268229 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:16:53.269236 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:16:53.281216 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:16:53.285719 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:16:53.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.288001 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:16:53.314903 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:16:53.317061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:16:53.324069 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:16:53.341453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:16:53.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.643771 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:16:53.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.644000 audit: BPF prog-id=21 op=LOAD
Feb 12 19:16:53.644000 audit: BPF prog-id=22 op=LOAD
Feb 12 19:16:53.644000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:16:53.644000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:16:53.645968 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:16:53.661380 systemd-udevd[1035]: Using default interface naming scheme 'v252'.
Feb 12 19:16:53.679222 systemd[1]: Started systemd-udevd.service.
Feb 12 19:16:53.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.685000 audit: BPF prog-id=23 op=LOAD
Feb 12 19:16:53.687918 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:16:53.694000 audit: BPF prog-id=24 op=LOAD
Feb 12 19:16:53.694000 audit: BPF prog-id=25 op=LOAD
Feb 12 19:16:53.694000 audit: BPF prog-id=26 op=LOAD
Feb 12 19:16:53.696085 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:16:53.703509 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 12 19:16:53.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.738668 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:16:53.746997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:16:53.798186 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:16:53.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.800262 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:16:53.802873 systemd-networkd[1055]: lo: Link UP
Feb 12 19:16:53.802880 systemd-networkd[1055]: lo: Gained carrier
Feb 12 19:16:53.808673 systemd-networkd[1055]: Enumeration completed
Feb 12 19:16:53.808797 systemd[1]: Started systemd-networkd.service.
Feb 12 19:16:53.808802 systemd-networkd[1055]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:16:53.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.817529 systemd-networkd[1055]: eth0: Link UP
Feb 12 19:16:53.817539 systemd-networkd[1055]: eth0: Gained carrier
Feb 12 19:16:53.818884 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:16:53.840952 systemd-networkd[1055]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:16:53.850648 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:16:53.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.851487 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:16:53.853268 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:16:53.856851 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:16:53.884744 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:16:53.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.885546 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:16:53.886177 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:16:53.886205 systemd[1]: Reached target local-fs.target.
Feb 12 19:16:53.886745 systemd[1]: Reached target machines.target.
Feb 12 19:16:53.888537 systemd[1]: Starting ldconfig.service...
Feb 12 19:16:53.889476 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:16:53.889541 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:16:53.890602 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:16:53.892366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:16:53.894947 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:16:53.896429 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:16:53.896488 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:16:53.897905 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 19:16:53.900267 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl)
Feb 12 19:16:53.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.904363 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:16:53.905730 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:16:53.911919 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 19:16:53.912963 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 19:16:53.917107 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 19:16:53.992262 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:16:53.992262 systemd-fsck[1080]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 12 19:16:53.995117 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:16:53.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:53.997628 systemd[1]: Mounting boot.mount...
Feb 12 19:16:54.024261 systemd[1]: Mounted boot.mount.
Feb 12 19:16:54.054056 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:16:54.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.059202 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:16:54.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.110933 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:16:54.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.113252 systemd[1]: Starting audit-rules.service...
Feb 12 19:16:54.114814 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:16:54.116525 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:16:54.117000 audit: BPF prog-id=27 op=LOAD
Feb 12 19:16:54.119071 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:16:54.119000 audit: BPF prog-id=28 op=LOAD
Feb 12 19:16:54.121719 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:16:54.123448 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:16:54.134871 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:16:54.134000 audit[1089]: SYSTEM_BOOT pid=1089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.137446 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:16:54.138614 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:16:54.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.156203 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:16:54.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.159876 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:16:54.168856 systemd[1]: Finished ldconfig.service.
Feb 12 19:16:54.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.170902 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:16:54.174501 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:16:54.175184 systemd-timesyncd[1088]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 12 19:16:54.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.175238 systemd-timesyncd[1088]: Initial clock synchronization to Mon 2024-02-12 19:16:54.573853 UTC.
Feb 12 19:16:54.175460 systemd[1]: Reached target time-set.target.
Feb 12 19:16:54.180602 systemd-resolved[1086]: Positive Trust Anchors:
Feb 12 19:16:54.180905 systemd-resolved[1086]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:16:54.180995 systemd-resolved[1086]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:16:54.182759 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:16:54.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.206561 systemd-resolved[1086]: Defaulting to hostname 'linux'.
Feb 12 19:16:54.208137 systemd[1]: Started systemd-resolved.service.
Feb 12 19:16:54.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:16:54.210319 systemd[1]: Reached target network.target.
Feb 12 19:16:54.210917 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:16:54.211000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:16:54.211000 audit[1105]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff969cdd0 a2=420 a3=0 items=0 ppid=1083 pid=1105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:16:54.211000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:16:54.212985 augenrules[1105]: No rules
Feb 12 19:16:54.213732 systemd[1]: Finished audit-rules.service.
Feb 12 19:16:54.214612 systemd[1]: Reached target sysinit.target.
Feb 12 19:16:54.215316 systemd[1]: Started motdgen.path.
Feb 12 19:16:54.215850 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:16:54.216759 systemd[1]: Started logrotate.timer.
Feb 12 19:16:54.217461 systemd[1]: Started mdadm.timer.
Feb 12 19:16:54.218031 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:16:54.218614 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:16:54.218642 systemd[1]: Reached target paths.target.
Feb 12 19:16:54.219176 systemd[1]: Reached target timers.target.
Feb 12 19:16:54.220035 systemd[1]: Listening on dbus.socket.
Feb 12 19:16:54.221671 systemd[1]: Starting docker.socket...
Feb 12 19:16:54.224965 systemd[1]: Listening on sshd.socket.
Feb 12 19:16:54.225628 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:16:54.226881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:16:54.227271 systemd[1]: Listening on docker.socket.
Feb 12 19:16:54.228096 systemd[1]: Reached target sockets.target.
Feb 12 19:16:54.228838 systemd[1]: Reached target basic.target.
Feb 12 19:16:54.229634 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:16:54.229664 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:16:54.230713 systemd[1]: Starting containerd.service...
Feb 12 19:16:54.232522 systemd[1]: Starting dbus.service...
Feb 12 19:16:54.234236 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:16:54.236033 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:16:54.236938 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:16:54.238194 systemd[1]: Starting motdgen.service...
Feb 12 19:16:54.240119 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:16:54.244216 jq[1114]: false
Feb 12 19:16:54.244651 systemd[1]: Starting prepare-critools.service...
Feb 12 19:16:54.246372 systemd[1]: Starting prepare-helm.service...
Feb 12 19:16:54.248335 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:16:54.250375 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:16:54.253693 systemd[1]: Starting systemd-logind.service...
Feb 12 19:16:54.254355 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:16:54.254448 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:16:54.254939 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 19:16:54.256201 systemd[1]: Starting update-engine.service...
Feb 12 19:16:54.258489 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:16:54.262509 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:16:54.262770 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:16:54.263167 jq[1130]: true
Feb 12 19:16:54.266561 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:16:54.266855 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:16:54.269377 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:16:54.269638 systemd[1]: Finished motdgen.service.
Feb 12 19:16:54.270875 jq[1139]: true
Feb 12 19:16:54.282813 tar[1138]: linux-arm64/helm
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda1
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda2
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda3
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found usr
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda4
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda6
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda7
Feb 12 19:16:54.283081 extend-filesystems[1115]: Found vda9
Feb 12 19:16:54.283081 extend-filesystems[1115]: Checking size of /dev/vda9
Feb 12 19:16:54.294635 tar[1136]: ./
Feb 12 19:16:54.294635 tar[1136]: ./loopback
Feb 12 19:16:54.294876 tar[1137]: crictl
Feb 12 19:16:54.312023 dbus-daemon[1113]: [system] SELinux support is enabled
Feb 12 19:16:54.312259 systemd[1]: Started dbus.service.
Feb 12 19:16:54.314951 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:16:54.314986 systemd[1]: Reached target system-config.target.
Feb 12 19:16:54.315730 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:16:54.315752 systemd[1]: Reached target user-config.target.
Feb 12 19:16:54.349734 systemd-logind[1126]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 12 19:16:54.350590 systemd-logind[1126]: New seat seat0.
Feb 12 19:16:54.350748 extend-filesystems[1115]: Resized partition /dev/vda9
Feb 12 19:16:54.356919 extend-filesystems[1170]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 19:16:54.359709 systemd[1]: Started systemd-logind.service.
Feb 12 19:16:54.369903 tar[1136]: ./bandwidth
Feb 12 19:16:54.373835 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 12 19:16:54.385372 update_engine[1127]: I0212 19:16:54.380276  1127 main.cc:92] Flatcar Update Engine starting
Feb 12 19:16:54.391731 systemd[1]: Started update-engine.service.
Feb 12 19:16:54.411207 update_engine[1127]: I0212 19:16:54.391752  1127 update_check_scheduler.cc:74] Next update check in 3m55s
Feb 12 19:16:54.394936 systemd[1]: Started locksmithd.service.
Feb 12 19:16:54.411588 env[1141]: time="2024-02-12T19:16:54.411488480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:16:54.418470 bash[1165]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:16:54.419348 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:16:54.424855 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 12 19:16:54.433253 env[1141]: time="2024-02-12T19:16:54.433206200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:16:54.454051 env[1141]: time="2024-02-12T19:16:54.450811800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:54.455053 extend-filesystems[1170]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 19:16:54.455053 extend-filesystems[1170]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 19:16:54.455053 extend-filesystems[1170]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 12 19:16:54.462263 extend-filesystems[1115]: Resized filesystem in /dev/vda9
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.459941240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.459979440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460220720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460238880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460251920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460262200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460333400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460622560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460745480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:16:54.463118 env[1141]: time="2024-02-12T19:16:54.460763080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:16:54.458466 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:16:54.463420 env[1141]: time="2024-02-12T19:16:54.460833400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:16:54.463420 env[1141]: time="2024-02-12T19:16:54.460847760Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:16:54.458674 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:16:54.476359 tar[1136]: ./ptp
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.492867800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.492921160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.492935920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.492974840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.492993280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493007720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493023400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493362360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493381160Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493393680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493406760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493445200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493610560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:16:54.493960 env[1141]: time="2024-02-12T19:16:54.493684440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494001160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494048000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494063440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494222480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494237240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494250440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494262640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494275400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494288720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494299480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494310880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494334 env[1141]: time="2024-02-12T19:16:54.494326280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494464440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494481160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494502640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494518720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494533600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494545840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:16:54.494583 env[1141]: time="2024-02-12T19:16:54.494563720Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:16:54.494750 env[1141]: time="2024-02-12T19:16:54.494597280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:16:54.494862 env[1141]: time="2024-02-12T19:16:54.494791000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.494866360Z" level=info msg="Connect containerd service"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.494896720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.495868360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496167760Z" level=info msg="Start subscribing containerd event"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496225960Z" level=info msg="Start recovering state"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496297400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496310200Z" level=info msg="Start event monitor"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496334680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496331280Z" level=info msg="Start snapshots syncer"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496350120Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.496357920Z" level=info msg="Start streaming server"
Feb 12 19:16:54.498753 env[1141]: time="2024-02-12T19:16:54.497354720Z" level=info msg="containerd successfully booted in 0.089855s"
Feb 12 19:16:54.496470 systemd[1]: Started containerd.service.
Feb 12 19:16:54.519300 tar[1136]: ./vlan
Feb 12 19:16:54.553943 tar[1136]: ./host-device
Feb 12 19:16:54.587725 tar[1136]: ./tuning
Feb 12 19:16:54.617908 tar[1136]: ./vrf
Feb 12 19:16:54.648868 tar[1136]: ./sbr
Feb 12 19:16:54.679489 tar[1136]: ./tap
Feb 12 19:16:54.714841 tar[1136]: ./dhcp
Feb 12 19:16:54.791548 tar[1138]: linux-arm64/LICENSE
Feb 12 19:16:54.791668 tar[1138]: linux-arm64/README.md
Feb 12 19:16:54.795935 systemd[1]: Finished prepare-helm.service.
Feb 12 19:16:54.802831 tar[1136]: ./static
Feb 12 19:16:54.803967 systemd[1]: Finished prepare-critools.service.
Feb 12 19:16:54.828479 tar[1136]: ./firewall
Feb 12 19:16:54.829964 locksmithd[1171]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:16:54.860696 tar[1136]: ./macvlan
Feb 12 19:16:54.889773 tar[1136]: ./dummy
Feb 12 19:16:54.918416 tar[1136]: ./bridge
Feb 12 19:16:54.949568 tar[1136]: ./ipvlan
Feb 12 19:16:54.978121 tar[1136]: ./portmap
Feb 12 19:16:55.005610 tar[1136]: ./host-local
Feb 12 19:16:55.040936 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:16:55.842007 systemd-networkd[1055]: eth0: Gained IPv6LL
Feb 12 19:16:56.288167 sshd_keygen[1140]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:16:56.306151 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:16:56.308468 systemd[1]: Starting issuegen.service...
Feb 12 19:16:56.313321 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:16:56.313488 systemd[1]: Finished issuegen.service.
Feb 12 19:16:56.316998 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:16:56.323509 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:16:56.325822 systemd[1]: Started getty@tty1.service.
Feb 12 19:16:56.328024 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 12 19:16:56.329129 systemd[1]: Reached target getty.target.
Feb 12 19:16:56.329823 systemd[1]: Reached target multi-user.target.
Feb 12 19:16:56.331713 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:16:56.338670 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:16:56.338848 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:16:56.339919 systemd[1]: Startup finished in 610ms (kernel) + 6.481s (initrd) + 5.280s (userspace) = 12.372s.
Feb 12 19:16:57.775052 systemd[1]: Created slice system-sshd.slice.
Feb 12 19:16:57.776879 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:53214.service.
Feb 12 19:16:57.830475 sshd[1202]: Accepted publickey for core from 10.0.0.1 port 53214 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:57.832587 sshd[1202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:57.845550 systemd-logind[1126]: New session 1 of user core.
Feb 12 19:16:57.847617 systemd[1]: Created slice user-500.slice.
Feb 12 19:16:57.849175 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:16:57.858206 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:16:57.860248 systemd[1]: Starting user@500.service...
Feb 12 19:16:57.864966 (systemd)[1205]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:57.947630 systemd[1205]: Queued start job for default target default.target.
Feb 12 19:16:57.948213 systemd[1205]: Reached target paths.target.
Feb 12 19:16:57.948235 systemd[1205]: Reached target sockets.target.
Feb 12 19:16:57.948247 systemd[1205]: Reached target timers.target.
Feb 12 19:16:57.948258 systemd[1205]: Reached target basic.target.
Feb 12 19:16:57.948313 systemd[1205]: Reached target default.target.
Feb 12 19:16:57.948338 systemd[1205]: Startup finished in 76ms.
Feb 12 19:16:57.948547 systemd[1]: Started user@500.service.
Feb 12 19:16:57.949536 systemd[1]: Started session-1.scope.
Feb 12 19:16:58.004009 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:53218.service.
Feb 12 19:16:58.048934 sshd[1214]: Accepted publickey for core from 10.0.0.1 port 53218 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:58.050156 sshd[1214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:58.054286 systemd-logind[1126]: New session 2 of user core.
Feb 12 19:16:58.055231 systemd[1]: Started session-2.scope.
Feb 12 19:16:58.119564 sshd[1214]: pam_unix(sshd:session): session closed for user core
Feb 12 19:16:58.122484 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:53218.service: Deactivated successfully.
Feb 12 19:16:58.123255 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 19:16:58.123762 systemd-logind[1126]: Session 2 logged out. Waiting for processes to exit.
Feb 12 19:16:58.125242 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:53230.service.
Feb 12 19:16:58.125958 systemd-logind[1126]: Removed session 2.
Feb 12 19:16:58.162752 sshd[1220]: Accepted publickey for core from 10.0.0.1 port 53230 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:58.164232 sshd[1220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:58.167922 systemd-logind[1126]: New session 3 of user core.
Feb 12 19:16:58.168822 systemd[1]: Started session-3.scope.
Feb 12 19:16:58.219287 sshd[1220]: pam_unix(sshd:session): session closed for user core
Feb 12 19:16:58.223171 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:53240.service.
Feb 12 19:16:58.223694 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:53230.service: Deactivated successfully.
Feb 12 19:16:58.224409 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 19:16:58.224934 systemd-logind[1126]: Session 3 logged out. Waiting for processes to exit.
Feb 12 19:16:58.225835 systemd-logind[1126]: Removed session 3.
Feb 12 19:16:58.258310 sshd[1225]: Accepted publickey for core from 10.0.0.1 port 53240 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:58.260770 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:58.264738 systemd-logind[1126]: New session 4 of user core.
Feb 12 19:16:58.265774 systemd[1]: Started session-4.scope.
Feb 12 19:16:58.324749 sshd[1225]: pam_unix(sshd:session): session closed for user core
Feb 12 19:16:58.327495 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:53240.service: Deactivated successfully.
Feb 12 19:16:58.328156 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:16:58.328717 systemd-logind[1126]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:16:58.329833 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:53250.service.
Feb 12 19:16:58.330538 systemd-logind[1126]: Removed session 4.
Feb 12 19:16:58.369191 sshd[1232]: Accepted publickey for core from 10.0.0.1 port 53250 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:16:58.370510 sshd[1232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:16:58.374208 systemd-logind[1126]: New session 5 of user core.
Feb 12 19:16:58.375159 systemd[1]: Started session-5.scope.
Feb 12 19:16:58.443147 sudo[1235]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:16:58.443362 sudo[1235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:16:59.050683 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:16:59.174579 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:16:59.174982 systemd[1]: Reached target network-online.target.
Feb 12 19:16:59.176525 systemd[1]: Starting docker.service...
Feb 12 19:16:59.267310 env[1253]: time="2024-02-12T19:16:59.267246112Z" level=info msg="Starting up"
Feb 12 19:16:59.268711 env[1253]: time="2024-02-12T19:16:59.268681747Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:16:59.268916 env[1253]: time="2024-02-12T19:16:59.268897019Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:16:59.269009 env[1253]: time="2024-02-12T19:16:59.268990924Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Feb 12 19:16:59.269065 env[1253]: time="2024-02-12T19:16:59.269052101Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:16:59.271116 env[1253]: time="2024-02-12T19:16:59.271089211Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:16:59.271217 env[1253]: time="2024-02-12T19:16:59.271201643Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:16:59.271276 env[1253]: time="2024-02-12T19:16:59.271261749Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Feb 12 19:16:59.271328 env[1253]: time="2024-02-12T19:16:59.271315145Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:16:59.525979 env[1253]: time="2024-02-12T19:16:59.525870464Z" level=info msg="Loading containers: start."
Feb 12 19:16:59.620871 kernel: Initializing XFRM netlink socket
Feb 12 19:16:59.644415 env[1253]: time="2024-02-12T19:16:59.644370832Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 19:16:59.699802 systemd-networkd[1055]: docker0: Link UP
Feb 12 19:16:59.709446 env[1253]: time="2024-02-12T19:16:59.709410838Z" level=info msg="Loading containers: done."
Feb 12 19:16:59.733537 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1035072712-merged.mount: Deactivated successfully.
Feb 12 19:16:59.735419 env[1253]: time="2024-02-12T19:16:59.735378410Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 19:16:59.735741 env[1253]: time="2024-02-12T19:16:59.735718052Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 19:16:59.735967 env[1253]: time="2024-02-12T19:16:59.735949091Z" level=info msg="Daemon has completed initialization"
Feb 12 19:16:59.749786 systemd[1]: Started docker.service.
Feb 12 19:16:59.757575 env[1253]: time="2024-02-12T19:16:59.757521574Z" level=info msg="API listen on /run/docker.sock"
Feb 12 19:16:59.776468 systemd[1]: Reloading.
Feb 12 19:16:59.834045 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-12T19:16:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:16:59.834074 /usr/lib/systemd/system-generators/torcx-generator[1398]: time="2024-02-12T19:16:59Z" level=info msg="torcx already run"
Feb 12 19:16:59.887080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:16:59.887101 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:16:59.905062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:16:59.976128 systemd[1]: Started kubelet.service.
Feb 12 19:17:00.159294 kubelet[1435]: E0212 19:17:00.158964    1435 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 19:17:00.163101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:17:00.163233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:17:00.470245 env[1141]: time="2024-02-12T19:17:00.470122946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 12 19:17:01.266308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444378628.mount: Deactivated successfully.
Feb 12 19:17:03.550536 env[1141]: time="2024-02-12T19:17:03.550485154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:03.551934 env[1141]: time="2024-02-12T19:17:03.551903151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:03.554314 env[1141]: time="2024-02-12T19:17:03.554277044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:03.555828 env[1141]: time="2024-02-12T19:17:03.555800700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:03.556696 env[1141]: time="2024-02-12T19:17:03.556660131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\""
Feb 12 19:17:03.565409 env[1141]: time="2024-02-12T19:17:03.565371741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 12 19:17:05.882748 env[1141]: time="2024-02-12T19:17:05.882693198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:05.885351 env[1141]: time="2024-02-12T19:17:05.885312297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:05.886955 env[1141]: time="2024-02-12T19:17:05.886922984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:05.889389 env[1141]: time="2024-02-12T19:17:05.889346917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:05.890145 env[1141]: time="2024-02-12T19:17:05.890116275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\""
Feb 12 19:17:05.899380 env[1141]: time="2024-02-12T19:17:05.899337505Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 12 19:17:07.642744 env[1141]: time="2024-02-12T19:17:07.642690121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.652054 env[1141]: time="2024-02-12T19:17:07.652007896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.678149 env[1141]: time="2024-02-12T19:17:07.678080085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.691719 env[1141]: time="2024-02-12T19:17:07.691666610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:07.692520 env[1141]: time="2024-02-12T19:17:07.692487974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\""
Feb 12 19:17:07.702781 env[1141]: time="2024-02-12T19:17:07.702688362Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 12 19:17:08.756786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081458558.mount: Deactivated successfully.
Feb 12 19:17:09.380058 env[1141]: time="2024-02-12T19:17:09.380005666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.384834 env[1141]: time="2024-02-12T19:17:09.384755497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.386352 env[1141]: time="2024-02-12T19:17:09.386312614Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.387864 env[1141]: time="2024-02-12T19:17:09.387840831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.388342 env[1141]: time="2024-02-12T19:17:09.388316136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\""
Feb 12 19:17:09.398344 env[1141]: time="2024-02-12T19:17:09.398306300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 19:17:09.801320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405736708.mount: Deactivated successfully.
Feb 12 19:17:09.806064 env[1141]: time="2024-02-12T19:17:09.806017199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.807896 env[1141]: time="2024-02-12T19:17:09.807854897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.809156 env[1141]: time="2024-02-12T19:17:09.809127726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.810666 env[1141]: time="2024-02-12T19:17:09.810637361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:09.811244 env[1141]: time="2024-02-12T19:17:09.811213395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 12 19:17:09.820141 env[1141]: time="2024-02-12T19:17:09.820085711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 12 19:17:10.414049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:17:10.414241 systemd[1]: Stopped kubelet.service.
Feb 12 19:17:10.415727 systemd[1]: Started kubelet.service.
Feb 12 19:17:10.428687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154728223.mount: Deactivated successfully.
Feb 12 19:17:10.468080 kubelet[1490]: E0212 19:17:10.468021 1490 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 19:17:10.471008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:17:10.471154 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:17:13.988395 env[1141]: time="2024-02-12T19:17:13.988349873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:13.996297 env[1141]: time="2024-02-12T19:17:13.996248100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:13.997977 env[1141]: time="2024-02-12T19:17:13.997944831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:14.000206 env[1141]: time="2024-02-12T19:17:14.000175115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:14.001025 env[1141]: time="2024-02-12T19:17:14.000998710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\""
Feb 12 19:17:14.010779 env[1141]: time="2024-02-12T19:17:14.010740485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 12 19:17:14.642275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155527487.mount: Deactivated successfully.
Feb 12 19:17:15.284325 env[1141]: time="2024-02-12T19:17:15.284242915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:15.285814 env[1141]: time="2024-02-12T19:17:15.285778594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:15.287335 env[1141]: time="2024-02-12T19:17:15.287306527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:15.288772 env[1141]: time="2024-02-12T19:17:15.288720267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:15.289202 env[1141]: time="2024-02-12T19:17:15.289163992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 12 19:17:19.185026 systemd[1]: Stopped kubelet.service.
Feb 12 19:17:19.198610 systemd[1]: Reloading.
Feb 12 19:17:19.240576 /usr/lib/systemd/system-generators/torcx-generator[1601]: time="2024-02-12T19:17:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:17:19.240941 /usr/lib/systemd/system-generators/torcx-generator[1601]: time="2024-02-12T19:17:19Z" level=info msg="torcx already run"
Feb 12 19:17:19.296286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:17:19.296304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:17:19.313169 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:17:19.374773 systemd[1]: Started kubelet.service.
Feb 12 19:17:19.412787 kubelet[1639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:19.412787 kubelet[1639]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:17:19.412787 kubelet[1639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:19.413138 kubelet[1639]: I0212 19:17:19.412835 1639 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:17:20.034885 kubelet[1639]: I0212 19:17:20.034855 1639 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 12 19:17:20.035057 kubelet[1639]: I0212 19:17:20.035044 1639 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:17:20.035340 kubelet[1639]: I0212 19:17:20.035323 1639 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 12 19:17:20.040245 kubelet[1639]: I0212 19:17:20.040209 1639 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:17:20.040390 kubelet[1639]: E0212 19:17:20.040365 1639 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.045977 kubelet[1639]: W0212 19:17:20.045950 1639 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:17:20.046590 kubelet[1639]: I0212 19:17:20.046563 1639 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:17:20.046764 kubelet[1639]: I0212 19:17:20.046742 1639 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:17:20.046919 kubelet[1639]: I0212 19:17:20.046895 1639 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 12 19:17:20.046919 kubelet[1639]: I0212 19:17:20.046919 1639 topology_manager.go:138] "Creating topology manager with none policy"
Feb 12 19:17:20.047031 kubelet[1639]: I0212 19:17:20.046927 1639 container_manager_linux.go:301] "Creating device plugin manager"
Feb 12 19:17:20.047062 kubelet[1639]: I0212 19:17:20.047030 1639 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:20.047258 kubelet[1639]: I0212 19:17:20.047233 1639 kubelet.go:393] "Attempting to sync node with API server"
Feb 12 19:17:20.047258 kubelet[1639]: I0212 19:17:20.047251 1639 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:17:20.047318 kubelet[1639]: I0212 19:17:20.047266 1639 kubelet.go:309] "Adding apiserver pod source"
Feb 12 19:17:20.047318 kubelet[1639]: I0212 19:17:20.047279 1639 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:17:20.047747 kubelet[1639]: W0212 19:17:20.047700 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.047890 kubelet[1639]: W0212 19:17:20.047839 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.048020 kubelet[1639]: E0212 19:17:20.047995 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.048079 kubelet[1639]: E0212 19:17:20.047936 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.048129 kubelet[1639]: I0212 19:17:20.048074 1639 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:17:20.048535 kubelet[1639]: W0212 19:17:20.048519 1639 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:17:20.049397 kubelet[1639]: I0212 19:17:20.049378 1639 server.go:1232] "Started kubelet"
Feb 12 19:17:20.049705 kubelet[1639]: I0212 19:17:20.049685 1639 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 19:17:20.049983 kubelet[1639]: E0212 19:17:20.049874 1639 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b3339e9dd47cf5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 20, 49347829, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 20, 49347829, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.60:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.60:6443: connect: connection refused'(may retry after sleeping)
Feb 12 19:17:20.050132 kubelet[1639]: I0212 19:17:20.050102 1639 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 12 19:17:20.050206 kubelet[1639]: I0212 19:17:20.050134 1639 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:17:20.050437 kubelet[1639]: E0212 19:17:20.050409 1639 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:17:20.050437 kubelet[1639]: E0212 19:17:20.050436 1639 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:17:20.051031 kubelet[1639]: I0212 19:17:20.051005 1639 server.go:462] "Adding debug handlers to kubelet server"
Feb 12 19:17:20.051838 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 19:17:20.052264 kubelet[1639]: I0212 19:17:20.052224 1639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:17:20.052355 kubelet[1639]: I0212 19:17:20.052337 1639 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 12 19:17:20.052505 kubelet[1639]: E0212 19:17:20.052484 1639 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 19:17:20.052623 kubelet[1639]: I0212 19:17:20.052610 1639 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:17:20.052890 kubelet[1639]: I0212 19:17:20.052866 1639 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 12 19:17:20.053181 kubelet[1639]: W0212 19:17:20.053141 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.053310 kubelet[1639]: E0212 19:17:20.053298 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.053445 kubelet[1639]: E0212 19:17:20.053416 1639 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms"
Feb 12 19:17:20.067743 kubelet[1639]: I0212 19:17:20.067702 1639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 12 19:17:20.068618 kubelet[1639]: I0212 19:17:20.068582 1639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 12 19:17:20.068618 kubelet[1639]: I0212 19:17:20.068615 1639 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 12 19:17:20.068713 kubelet[1639]: I0212 19:17:20.068631 1639 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 12 19:17:20.068713 kubelet[1639]: E0212 19:17:20.068689 1639 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:17:20.069429 kubelet[1639]: W0212 19:17:20.069365 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.069538 kubelet[1639]: E0212 19:17:20.069482 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.072329 kubelet[1639]: I0212 19:17:20.072308 1639 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:17:20.072329 kubelet[1639]: I0212 19:17:20.072325 1639 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:17:20.072411 kubelet[1639]: I0212 19:17:20.072342 1639 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:20.074256 kubelet[1639]: I0212 19:17:20.074219 1639 policy_none.go:49] "None policy: Start"
Feb 12 19:17:20.074830 kubelet[1639]: I0212 19:17:20.074784 1639 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:17:20.074830 kubelet[1639]: I0212 19:17:20.074822 1639 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:17:20.079324 systemd[1]: Created slice kubepods.slice.
Feb 12 19:17:20.083117 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 19:17:20.085558 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 19:17:20.094480 kubelet[1639]: I0212 19:17:20.094451 1639 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:17:20.095002 kubelet[1639]: I0212 19:17:20.094975 1639 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:17:20.095453 kubelet[1639]: E0212 19:17:20.095435 1639 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 12 19:17:20.154639 kubelet[1639]: I0212 19:17:20.154604 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:20.155011 kubelet[1639]: E0212 19:17:20.154983 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost"
Feb 12 19:17:20.169269 kubelet[1639]: I0212 19:17:20.169215 1639 topology_manager.go:215] "Topology Admit Handler" podUID="1e0b04285ffa2f5475654a163f591f39" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 12 19:17:20.170301 kubelet[1639]: I0212 19:17:20.170281 1639 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 12 19:17:20.171268 kubelet[1639]: I0212 19:17:20.171214 1639 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 12 19:17:20.175843 systemd[1]: Created slice kubepods-burstable-pod1e0b04285ffa2f5475654a163f591f39.slice.
Feb 12 19:17:20.189301 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice.
Feb 12 19:17:20.192878 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice.
Feb 12 19:17:20.253951 kubelet[1639]: E0212 19:17:20.253901 1639 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms"
Feb 12 19:17:20.354284 kubelet[1639]: I0212 19:17:20.354181 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e0b04285ffa2f5475654a163f591f39-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e0b04285ffa2f5475654a163f591f39\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:20.354284 kubelet[1639]: I0212 19:17:20.354236 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e0b04285ffa2f5475654a163f591f39-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e0b04285ffa2f5475654a163f591f39\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:20.354284 kubelet[1639]: I0212 19:17:20.354263 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e0b04285ffa2f5475654a163f591f39-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e0b04285ffa2f5475654a163f591f39\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:20.354432 kubelet[1639]: I0212 19:17:20.354285 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:20.354432 kubelet[1639]: I0212 19:17:20.354316 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:20.354432 kubelet[1639]: I0212 19:17:20.354336 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:20.354432 kubelet[1639]: I0212 19:17:20.354359 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:20.354432 kubelet[1639]: I0212 19:17:20.354387 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:17:20.354552 kubelet[1639]: I0212 19:17:20.354408 1639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:20.356843 kubelet[1639]: I0212 19:17:20.356731 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:20.357105 kubelet[1639]: E0212 19:17:20.357072 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost"
Feb 12 19:17:20.488882 kubelet[1639]: E0212 19:17:20.488845 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:20.489494 env[1141]: time="2024-02-12T19:17:20.489458162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e0b04285ffa2f5475654a163f591f39,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:20.492038 kubelet[1639]: E0212 19:17:20.492014 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:20.492522 env[1141]: time="2024-02-12T19:17:20.492488234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:20.494913 kubelet[1639]: E0212 19:17:20.494889 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:20.495221 env[1141]: time="2024-02-12T19:17:20.495185919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}"
Feb 12 19:17:20.654569 kubelet[1639]: E0212 19:17:20.654462 1639 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms"
Feb 12 19:17:20.758979 kubelet[1639]: I0212 19:17:20.758944 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:20.759309 kubelet[1639]: E0212 19:17:20.759258 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost"
Feb 12 19:17:20.875564 kubelet[1639]: W0212 19:17:20.875480 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.875564 kubelet[1639]: E0212 19:17:20.875546 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.935145 kubelet[1639]: W0212 19:17:20.935011 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:20.935145 kubelet[1639]: E0212 19:17:20.935072 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:21.005017 kubelet[1639]: W0212 19:17:21.004947 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:21.005017 kubelet[1639]: E0212 19:17:21.005016 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:21.143673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321781424.mount: Deactivated successfully.
Feb 12 19:17:21.150702 env[1141]: time="2024-02-12T19:17:21.150650612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.155500 env[1141]: time="2024-02-12T19:17:21.155465294Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.157745 env[1141]: time="2024-02-12T19:17:21.157711486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.159422 env[1141]: time="2024-02-12T19:17:21.159390922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.161302 env[1141]: time="2024-02-12T19:17:21.161231768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.163244 env[1141]: time="2024-02-12T19:17:21.163205258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.165033 env[1141]: time="2024-02-12T19:17:21.165005040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.166624 env[1141]: time="2024-02-12T19:17:21.166594698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.169932 env[1141]: time="2024-02-12T19:17:21.169902411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.171845 env[1141]: time="2024-02-12T19:17:21.171802147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.173095 env[1141]: time="2024-02-12T19:17:21.173064258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.177395 env[1141]: time="2024-02-12T19:17:21.177340829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:17:21.209623 env[1141]: time="2024-02-12T19:17:21.209430473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:21.209623 env[1141]: time="2024-02-12T19:17:21.209473259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:21.209623 env[1141]: time="2024-02-12T19:17:21.209483354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:21.210172 env[1141]: time="2024-02-12T19:17:21.210122142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/348135320485423bf9f4054fbc6944eb18051f2fb0430e94f178221c7e4afaff pid=1690 runtime=io.containerd.runc.v2
Feb 12 19:17:21.210696 env[1141]: time="2024-02-12T19:17:21.210159199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:21.210875 env[1141]: time="2024-02-12T19:17:21.210835685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:21.210991 env[1141]: time="2024-02-12T19:17:21.210957593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:21.211235 env[1141]: time="2024-02-12T19:17:21.211195000Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cafdfcbb4a2941081ceaa56fa80ff02b88a261b12694b208f0fc71deb011c7ea pid=1689 runtime=io.containerd.runc.v2
Feb 12 19:17:21.223742 systemd[1]: Started cri-containerd-348135320485423bf9f4054fbc6944eb18051f2fb0430e94f178221c7e4afaff.scope.
Feb 12 19:17:21.225304 env[1141]: time="2024-02-12T19:17:21.225229334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:17:21.225433 env[1141]: time="2024-02-12T19:17:21.225356211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:17:21.225433 env[1141]: time="2024-02-12T19:17:21.225369792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:17:21.225676 env[1141]: time="2024-02-12T19:17:21.225612407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1076f9386646c8c10128bf9bb214ad362991586a1d7d61b034bab4d07f061865 pid=1731 runtime=io.containerd.runc.v2
Feb 12 19:17:21.231038 systemd[1]: Started cri-containerd-cafdfcbb4a2941081ceaa56fa80ff02b88a261b12694b208f0fc71deb011c7ea.scope.
Feb 12 19:17:21.246575 systemd[1]: Started cri-containerd-1076f9386646c8c10128bf9bb214ad362991586a1d7d61b034bab4d07f061865.scope.
Feb 12 19:17:21.286851 env[1141]: time="2024-02-12T19:17:21.286379379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"348135320485423bf9f4054fbc6944eb18051f2fb0430e94f178221c7e4afaff\""
Feb 12 19:17:21.287823 kubelet[1639]: E0212 19:17:21.287796 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:21.290217 env[1141]: time="2024-02-12T19:17:21.290178732Z" level=info msg="CreateContainer within sandbox \"348135320485423bf9f4054fbc6944eb18051f2fb0430e94f178221c7e4afaff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:17:21.292992 env[1141]: time="2024-02-12T19:17:21.292948173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cafdfcbb4a2941081ceaa56fa80ff02b88a261b12694b208f0fc71deb011c7ea\""
Feb 12 19:17:21.293923 kubelet[1639]: E0212 19:17:21.293899 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:21.295780 env[1141]: time="2024-02-12T19:17:21.295744736Z" level=info msg="CreateContainer within sandbox \"cafdfcbb4a2941081ceaa56fa80ff02b88a261b12694b208f0fc71deb011c7ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:17:21.301024 env[1141]: time="2024-02-12T19:17:21.300982633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e0b04285ffa2f5475654a163f591f39,Namespace:kube-system,Attempt:0,} returns sandbox id \"1076f9386646c8c10128bf9bb214ad362991586a1d7d61b034bab4d07f061865\""
Feb 12 19:17:21.301687 kubelet[1639]: E0212 19:17:21.301666 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:21.303585 env[1141]: time="2024-02-12T19:17:21.303547718Z" level=info msg="CreateContainer within sandbox \"1076f9386646c8c10128bf9bb214ad362991586a1d7d61b034bab4d07f061865\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:17:21.314184 env[1141]: time="2024-02-12T19:17:21.314118658Z" level=info msg="CreateContainer within sandbox \"348135320485423bf9f4054fbc6944eb18051f2fb0430e94f178221c7e4afaff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e12648119836561a77f4a10b0a646fa005cf68f0fb4645774d345ac346e0e6ee\""
Feb 12 19:17:21.314998 env[1141]: time="2024-02-12T19:17:21.314960159Z" level=info msg="StartContainer for \"e12648119836561a77f4a10b0a646fa005cf68f0fb4645774d345ac346e0e6ee\""
Feb 12 19:17:21.318390 env[1141]: time="2024-02-12T19:17:21.318348156Z" level=info msg="CreateContainer within sandbox \"cafdfcbb4a2941081ceaa56fa80ff02b88a261b12694b208f0fc71deb011c7ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"02a77f54207dcaf8abf1c472675bf76d8ef5e1d1b40c4acedd5d7340f5fffec9\""
Feb 12 19:17:21.319137 env[1141]: time="2024-02-12T19:17:21.319105287Z" level=info msg="StartContainer for \"02a77f54207dcaf8abf1c472675bf76d8ef5e1d1b40c4acedd5d7340f5fffec9\""
Feb 12 19:17:21.319521 env[1141]: time="2024-02-12T19:17:21.319487998Z" level=info msg="CreateContainer within sandbox \"1076f9386646c8c10128bf9bb214ad362991586a1d7d61b034bab4d07f061865\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d49755b425e0434fa32705c6e9cf9990c3c13b502d56f6b6d6d4e97cf0062538\""
Feb 12 19:17:21.319850 env[1141]: time="2024-02-12T19:17:21.319826401Z" level=info msg="StartContainer for \"d49755b425e0434fa32705c6e9cf9990c3c13b502d56f6b6d6d4e97cf0062538\""
Feb 12 19:17:21.334392 systemd[1]: Started cri-containerd-e12648119836561a77f4a10b0a646fa005cf68f0fb4645774d345ac346e0e6ee.scope.
Feb 12 19:17:21.338852 systemd[1]: Started cri-containerd-02a77f54207dcaf8abf1c472675bf76d8ef5e1d1b40c4acedd5d7340f5fffec9.scope.
Feb 12 19:17:21.342997 systemd[1]: Started cri-containerd-d49755b425e0434fa32705c6e9cf9990c3c13b502d56f6b6d6d4e97cf0062538.scope.
Feb 12 19:17:21.396420 env[1141]: time="2024-02-12T19:17:21.395631340Z" level=info msg="StartContainer for \"e12648119836561a77f4a10b0a646fa005cf68f0fb4645774d345ac346e0e6ee\" returns successfully"
Feb 12 19:17:21.396554 env[1141]: time="2024-02-12T19:17:21.396452850Z" level=info msg="StartContainer for \"d49755b425e0434fa32705c6e9cf9990c3c13b502d56f6b6d6d4e97cf0062538\" returns successfully"
Feb 12 19:17:21.418750 env[1141]: time="2024-02-12T19:17:21.416023942Z" level=info msg="StartContainer for \"02a77f54207dcaf8abf1c472675bf76d8ef5e1d1b40c4acedd5d7340f5fffec9\" returns successfully"
Feb 12 19:17:21.455865 kubelet[1639]: E0212 19:17:21.455804 1639 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="1.6s"
Feb 12 19:17:21.546915 kubelet[1639]: W0212 19:17:21.546743 1639 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:21.546915 kubelet[1639]: E0212 19:17:21.546840 1639 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused
Feb 12 19:17:21.561014 kubelet[1639]: I0212 19:17:21.560973 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:21.561310 kubelet[1639]: E0212 19:17:21.561283 1639 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost"
Feb 12 19:17:22.075511 kubelet[1639]: E0212 19:17:22.075486 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:22.077389 kubelet[1639]: E0212 19:17:22.077370 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:22.079067 kubelet[1639]: E0212 19:17:22.079047 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:23.080576 kubelet[1639]: E0212 19:17:23.080536 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:23.162598 kubelet[1639]: I0212 19:17:23.162562 1639 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:23.719295 kubelet[1639]: E0212 19:17:23.719263 1639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:23.777881 kubelet[1639]: E0212 19:17:23.777844 1639 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 12 19:17:23.834982 kubelet[1639]: I0212 19:17:23.834933 1639 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 19:17:24.049732 kubelet[1639]: I0212 19:17:24.049596 1639 apiserver.go:52] "Watching apiserver"
Feb 12 19:17:24.053219 kubelet[1639]: I0212 19:17:24.053192 1639 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:17:26.458252 systemd[1]: Reloading.
Feb 12 19:17:26.515083 /usr/lib/systemd/system-generators/torcx-generator[1936]: time="2024-02-12T19:17:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:17:26.515135 /usr/lib/systemd/system-generators/torcx-generator[1936]: time="2024-02-12T19:17:26Z" level=info msg="torcx already run"
Feb 12 19:17:26.569156 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:17:26.569183 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:17:26.587437 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:17:26.677183 systemd[1]: Stopping kubelet.service...
Feb 12 19:17:26.696215 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 19:17:26.696506 systemd[1]: Stopped kubelet.service.
Feb 12 19:17:26.699080 systemd[1]: Started kubelet.service.
Feb 12 19:17:26.744873 kubelet[1973]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:26.744873 kubelet[1973]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:17:26.744873 kubelet[1973]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:17:26.744873 kubelet[1973]: I0212 19:17:26.743871 1973 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:17:26.749034 kubelet[1973]: I0212 19:17:26.749001 1973 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 12 19:17:26.749034 kubelet[1973]: I0212 19:17:26.749028 1973 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:17:26.749224 kubelet[1973]: I0212 19:17:26.749208 1973 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 12 19:17:26.750736 kubelet[1973]: I0212 19:17:26.750705 1973 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 19:17:26.751790 kubelet[1973]: I0212 19:17:26.751761 1973 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:17:26.762901 kubelet[1973]: W0212 19:17:26.762883 1973 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:17:26.764141 kubelet[1973]: I0212 19:17:26.764122 1973 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:17:26.764489 kubelet[1973]: I0212 19:17:26.764477 1973 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:17:26.764740 kubelet[1973]: I0212 19:17:26.764720 1973 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 12 19:17:26.764873 kubelet[1973]: I0212 19:17:26.764853 1973 topology_manager.go:138] "Creating topology manager with none policy"
Feb 12 19:17:26.764873 kubelet[1973]: I0212 19:17:26.764875 1973 container_manager_linux.go:301] "Creating device plugin manager"
Feb 12 19:17:26.764948 kubelet[1973]: I0212 19:17:26.764938 1973 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:26.765044 kubelet[1973]: I0212 19:17:26.765034 1973 kubelet.go:393] "Attempting to sync node with API server"
Feb 12 19:17:26.765072 kubelet[1973]: I0212 19:17:26.765052 1973 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:17:26.765663 kubelet[1973]: I0212 19:17:26.765569 1973 kubelet.go:309] "Adding apiserver pod source"
Feb 12 19:17:26.765663 kubelet[1973]: I0212 19:17:26.765597 1973 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.769137 1973 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.769756 1973 server.go:1232] "Started kubelet"
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.770082 1973 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.770143 1973 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.770444 1973 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.770711 1973 server.go:462] "Adding debug handlers to kubelet server"
Feb 12 19:17:26.778554 kubelet[1973]: E0212 19:17:26.772578 1973 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:17:26.778554 kubelet[1973]: E0212 19:17:26.772603 1973 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:17:26.778554 kubelet[1973]: I0212 19:17:26.772638 1973 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:17:26.780446 kubelet[1973]: I0212 19:17:26.780256 1973 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 12 19:17:26.780446 kubelet[1973]: I0212 19:17:26.780385 1973 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:17:26.780559 kubelet[1973]: I0212 19:17:26.780527 1973 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 12 19:17:26.799133 kubelet[1973]: I0212 19:17:26.799098 1973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 12 19:17:26.800293 kubelet[1973]: I0212 19:17:26.800278 1973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 12 19:17:26.800893 kubelet[1973]: I0212 19:17:26.800810 1973 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 12 19:17:26.801256 kubelet[1973]: I0212 19:17:26.801205 1973 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 12 19:17:26.801435 kubelet[1973]: E0212 19:17:26.801277 1973 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:17:26.835810 kubelet[1973]: I0212 19:17:26.835785 1973 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:17:26.835974 kubelet[1973]: I0212 19:17:26.835960 1973 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:17:26.836079 kubelet[1973]: I0212 19:17:26.836068 1973 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:17:26.836442 kubelet[1973]: I0212 19:17:26.836417 1973 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 19:17:26.836566 kubelet[1973]: I0212 19:17:26.836553 1973 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 12 19:17:26.836638 kubelet[1973]: I0212 19:17:26.836628 1973 policy_none.go:49] "None policy: Start"
Feb 12 19:17:26.837529 kubelet[1973]: I0212 19:17:26.837499 1973 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:17:26.837639 kubelet[1973]: I0212 19:17:26.837627 1973 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:17:26.837812 kubelet[1973]: I0212 19:17:26.837799 1973 state_mem.go:75] "Updated machine memory state"
Feb 12 19:17:26.839647 sudo[2005]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 19:17:26.839861 sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 19:17:26.842774 kubelet[1973]: I0212 19:17:26.842755 1973 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:17:26.843104 kubelet[1973]: I0212 19:17:26.843029 1973 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:17:26.884663 kubelet[1973]: I0212 19:17:26.884637 1973 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:17:26.893144 kubelet[1973]: I0212 19:17:26.893096 1973 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb 12 19:17:26.893325 kubelet[1973]: I0212 19:17:26.893183 1973 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 19:17:26.901986 kubelet[1973]: I0212 19:17:26.901964 1973 topology_manager.go:215] "Topology Admit Handler" podUID="1e0b04285ffa2f5475654a163f591f39" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 12 19:17:26.902105 kubelet[1973]: I0212 19:17:26.902053 1973 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 12 19:17:26.902105 kubelet[1973]: I0212 19:17:26.902087 1973 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 12 19:17:27.084672 kubelet[1973]: I0212 19:17:27.082607 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e0b04285ffa2f5475654a163f591f39-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e0b04285ffa2f5475654a163f591f39\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:27.084672 kubelet[1973]: I0212 19:17:27.082680 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:27.084672 kubelet[1973]: I0212 19:17:27.082702 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:27.084672 kubelet[1973]: I0212 19:17:27.082724 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:27.084672 kubelet[1973]: I0212 19:17:27.082747 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:17:27.084938 kubelet[1973]: I0212 19:17:27.082764 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e0b04285ffa2f5475654a163f591f39-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e0b04285ffa2f5475654a163f591f39\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:27.084938 kubelet[1973]: I0212 19:17:27.082791 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e0b04285ffa2f5475654a163f591f39-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e0b04285ffa2f5475654a163f591f39\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:27.084938 kubelet[1973]: I0212 19:17:27.082810 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:27.084938 kubelet[1973]: I0212 19:17:27.082957 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:17:27.211925 kubelet[1973]: E0212 19:17:27.211881 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.212360 kubelet[1973]: E0212 19:17:27.212341 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.212759 kubelet[1973]: E0212 19:17:27.212737 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.293214 sudo[2005]: pam_unix(sudo:session): session closed for user root
Feb 12 19:17:27.766124 kubelet[1973]: I0212 19:17:27.766074 1973 apiserver.go:52] "Watching apiserver"
Feb 12 19:17:27.780595 kubelet[1973]: I0212 19:17:27.780555 1973 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:17:27.815889 kubelet[1973]: E0212 19:17:27.815856 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.816900 kubelet[1973]: E0212 19:17:27.816874 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.822921 kubelet[1973]: E0212 19:17:27.822269 1973 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 19:17:27.822921 kubelet[1973]: E0212 19:17:27.822709 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:27.836338 kubelet[1973]: I0212 19:17:27.836288 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8362413819999999 podCreationTimestamp="2024-02-12 19:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:27.836052688 +0000 UTC m=+1.133902806" watchObservedRunningTime="2024-02-12 19:17:27.836241382 +0000 UTC m=+1.134091500"
Feb 12 19:17:27.851499 kubelet[1973]: I0212 19:17:27.851462 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.851426645 podCreationTimestamp="2024-02-12 19:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:27.843933506 +0000 UTC m=+1.141783624" watchObservedRunningTime="2024-02-12 19:17:27.851426645 +0000 UTC m=+1.149276763"
Feb 12 19:17:28.817202 kubelet[1973]: E0212 19:17:28.817165 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:29.804607 sudo[1235]: pam_unix(sudo:session): session closed for user root
Feb 12 19:17:29.805985 sshd[1232]: pam_unix(sshd:session): session closed for user core
Feb 12 19:17:29.808600 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:53250.service: Deactivated successfully.
Feb 12 19:17:29.809344 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:17:29.809525 systemd[1]: session-5.scope: Consumed 6.814s CPU time.
Feb 12 19:17:29.809944 systemd-logind[1126]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:17:29.810580 systemd-logind[1126]: Removed session 5.
Feb 12 19:17:34.702162 kubelet[1973]: E0212 19:17:34.702078 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:34.717952 kubelet[1973]: I0212 19:17:34.717922 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.717866541 podCreationTimestamp="2024-02-12 19:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:27.851562919 +0000 UTC m=+1.149413037" watchObservedRunningTime="2024-02-12 19:17:34.717866541 +0000 UTC m=+8.015716659"
Feb 12 19:17:34.825439 kubelet[1973]: E0212 19:17:34.825116 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:36.340900 kubelet[1973]: E0212 19:17:36.340872 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:36.620850 kubelet[1973]: E0212 19:17:36.619992 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:36.828166 kubelet[1973]: E0212 19:17:36.828133 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:36.828343 kubelet[1973]: E0212 19:17:36.828310 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:17:39.969242 update_engine[1127]: I0212 19:17:39.968874 1127 update_attempter.cc:509] Updating boot flags...
Feb 12 19:17:40.882189 kubelet[1973]: I0212 19:17:40.882154 1973 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 19:17:40.882533 env[1141]: time="2024-02-12T19:17:40.882470593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:17:40.882699 kubelet[1973]: I0212 19:17:40.882623 1973 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 19:17:41.531682 kubelet[1973]: I0212 19:17:41.531639 1973 topology_manager.go:215] "Topology Admit Handler" podUID="78a1a7b7-b3c3-437d-aa74-e92ff9fc6893" podNamespace="kube-system" podName="kube-proxy-l2zsq"
Feb 12 19:17:41.537653 systemd[1]: Created slice kubepods-besteffort-pod78a1a7b7_b3c3_437d_aa74_e92ff9fc6893.slice.
Feb 12 19:17:41.538913 kubelet[1973]: I0212 19:17:41.538881 1973 topology_manager.go:215] "Topology Admit Handler" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" podNamespace="kube-system" podName="cilium-9zctg"
Feb 12 19:17:41.548977 systemd[1]: Created slice kubepods-burstable-pod3e25e999_c607_4c7e_9400_b44195b742b4.slice.
Feb 12 19:17:41.597996 kubelet[1973]: I0212 19:17:41.597964 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-kernel\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598110 kubelet[1973]: I0212 19:17:41.598016 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-cgroup\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598110 kubelet[1973]: I0212 19:17:41.598040 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78a1a7b7-b3c3-437d-aa74-e92ff9fc6893-lib-modules\") pod \"kube-proxy-l2zsq\" (UID: \"78a1a7b7-b3c3-437d-aa74-e92ff9fc6893\") " pod="kube-system/kube-proxy-l2zsq" Feb 12 19:17:41.598110 kubelet[1973]: I0212 19:17:41.598080 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-run\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598488 kubelet[1973]: I0212 19:17:41.598470 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68t6q\" (UniqueName: \"kubernetes.io/projected/78a1a7b7-b3c3-437d-aa74-e92ff9fc6893-kube-api-access-68t6q\") pod \"kube-proxy-l2zsq\" (UID: \"78a1a7b7-b3c3-437d-aa74-e92ff9fc6893\") " pod="kube-system/kube-proxy-l2zsq" Feb 12 19:17:41.598550 kubelet[1973]: I0212 19:17:41.598542 1973 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-config-path\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598708 kubelet[1973]: I0212 19:17:41.598690 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-hubble-tls\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598758 kubelet[1973]: I0212 19:17:41.598726 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cni-path\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598792 kubelet[1973]: I0212 19:17:41.598764 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-lib-modules\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.598924 kubelet[1973]: I0212 19:17:41.598789 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-hostproc\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.599082 kubelet[1973]: I0212 19:17:41.599067 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/78a1a7b7-b3c3-437d-aa74-e92ff9fc6893-xtables-lock\") pod \"kube-proxy-l2zsq\" (UID: \"78a1a7b7-b3c3-437d-aa74-e92ff9fc6893\") " pod="kube-system/kube-proxy-l2zsq" Feb 12 19:17:41.599129 kubelet[1973]: I0212 19:17:41.599106 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-etc-cni-netd\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.599291 kubelet[1973]: I0212 19:17:41.599264 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e25e999-c607-4c7e-9400-b44195b742b4-clustermesh-secrets\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.599630 kubelet[1973]: I0212 19:17:41.599608 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps7g2\" (UniqueName: \"kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-kube-api-access-ps7g2\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.599737 kubelet[1973]: I0212 19:17:41.599719 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78a1a7b7-b3c3-437d-aa74-e92ff9fc6893-kube-proxy\") pod \"kube-proxy-l2zsq\" (UID: \"78a1a7b7-b3c3-437d-aa74-e92ff9fc6893\") " pod="kube-system/kube-proxy-l2zsq" Feb 12 19:17:41.599979 kubelet[1973]: I0212 19:17:41.599960 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-bpf-maps\") pod \"cilium-9zctg\" (UID: 
\"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.600084 kubelet[1973]: I0212 19:17:41.600003 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-xtables-lock\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.600084 kubelet[1973]: I0212 19:17:41.600030 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-net\") pod \"cilium-9zctg\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " pod="kube-system/cilium-9zctg" Feb 12 19:17:41.845751 kubelet[1973]: E0212 19:17:41.845722 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:41.846479 env[1141]: time="2024-02-12T19:17:41.846422063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2zsq,Uid:78a1a7b7-b3c3-437d-aa74-e92ff9fc6893,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:41.851667 kubelet[1973]: E0212 19:17:41.851637 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:41.852346 env[1141]: time="2024-02-12T19:17:41.852289841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zctg,Uid:3e25e999-c607-4c7e-9400-b44195b742b4,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:41.861105 env[1141]: time="2024-02-12T19:17:41.861026313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:41.861384 env[1141]: time="2024-02-12T19:17:41.861105075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:41.861384 env[1141]: time="2024-02-12T19:17:41.861132410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:41.861461 env[1141]: time="2024-02-12T19:17:41.861336519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/242a888fb4d5d9a7416b55d3b980d84a9526283d234c21dfcf99165fab560802 pid=2080 runtime=io.containerd.runc.v2 Feb 12 19:17:41.888826 kubelet[1973]: I0212 19:17:41.887301 1973 topology_manager.go:215] "Topology Admit Handler" podUID="81ae9401-a8eb-42ac-8078-a8399ec03616" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-wtxfb" Feb 12 19:17:41.893409 env[1141]: time="2024-02-12T19:17:41.891445341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:41.893409 env[1141]: time="2024-02-12T19:17:41.892033856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:41.893409 env[1141]: time="2024-02-12T19:17:41.892047143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:41.893409 env[1141]: time="2024-02-12T19:17:41.893308537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793 pid=2108 runtime=io.containerd.runc.v2 Feb 12 19:17:41.896300 systemd[1]: Created slice kubepods-besteffort-pod81ae9401_a8eb_42ac_8078_a8399ec03616.slice. 
Feb 12 19:17:41.907333 systemd[1]: Started cri-containerd-242a888fb4d5d9a7416b55d3b980d84a9526283d234c21dfcf99165fab560802.scope. Feb 12 19:17:41.924049 systemd[1]: Started cri-containerd-31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793.scope. Feb 12 19:17:41.944791 env[1141]: time="2024-02-12T19:17:41.944407345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2zsq,Uid:78a1a7b7-b3c3-437d-aa74-e92ff9fc6893,Namespace:kube-system,Attempt:0,} returns sandbox id \"242a888fb4d5d9a7416b55d3b980d84a9526283d234c21dfcf99165fab560802\"" Feb 12 19:17:41.945345 kubelet[1973]: E0212 19:17:41.945319 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:41.949271 env[1141]: time="2024-02-12T19:17:41.949228123Z" level=info msg="CreateContainer within sandbox \"242a888fb4d5d9a7416b55d3b980d84a9526283d234c21dfcf99165fab560802\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:17:41.962162 env[1141]: time="2024-02-12T19:17:41.962121978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zctg,Uid:3e25e999-c607-4c7e-9400-b44195b742b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\"" Feb 12 19:17:41.963085 kubelet[1973]: E0212 19:17:41.963065 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:41.964283 env[1141]: time="2024-02-12T19:17:41.964251077Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:17:41.964908 env[1141]: time="2024-02-12T19:17:41.964869968Z" level=info msg="CreateContainer within sandbox \"242a888fb4d5d9a7416b55d3b980d84a9526283d234c21dfcf99165fab560802\" 
for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38d3c04108c9ba1b632cab0ff4aa5bd04b38bcfd5dbdd82a6eadebfb58b41af1\"" Feb 12 19:17:41.965509 env[1141]: time="2024-02-12T19:17:41.965480014Z" level=info msg="StartContainer for \"38d3c04108c9ba1b632cab0ff4aa5bd04b38bcfd5dbdd82a6eadebfb58b41af1\"" Feb 12 19:17:41.982973 systemd[1]: Started cri-containerd-38d3c04108c9ba1b632cab0ff4aa5bd04b38bcfd5dbdd82a6eadebfb58b41af1.scope. Feb 12 19:17:42.003533 kubelet[1973]: I0212 19:17:42.003448 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81ae9401-a8eb-42ac-8078-a8399ec03616-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-wtxfb\" (UID: \"81ae9401-a8eb-42ac-8078-a8399ec03616\") " pod="kube-system/cilium-operator-6bc8ccdb58-wtxfb" Feb 12 19:17:42.003533 kubelet[1973]: I0212 19:17:42.003495 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqplx\" (UniqueName: \"kubernetes.io/projected/81ae9401-a8eb-42ac-8078-a8399ec03616-kube-api-access-mqplx\") pod \"cilium-operator-6bc8ccdb58-wtxfb\" (UID: \"81ae9401-a8eb-42ac-8078-a8399ec03616\") " pod="kube-system/cilium-operator-6bc8ccdb58-wtxfb" Feb 12 19:17:42.038505 env[1141]: time="2024-02-12T19:17:42.035140632Z" level=info msg="StartContainer for \"38d3c04108c9ba1b632cab0ff4aa5bd04b38bcfd5dbdd82a6eadebfb58b41af1\" returns successfully" Feb 12 19:17:42.205543 kubelet[1973]: E0212 19:17:42.205430 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:42.206165 env[1141]: time="2024-02-12T19:17:42.206077844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wtxfb,Uid:81ae9401-a8eb-42ac-8078-a8399ec03616,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:42.218626 
env[1141]: time="2024-02-12T19:17:42.218563883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:42.218728 env[1141]: time="2024-02-12T19:17:42.218638961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:42.218728 env[1141]: time="2024-02-12T19:17:42.218666415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:42.218854 env[1141]: time="2024-02-12T19:17:42.218812609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9 pid=2227 runtime=io.containerd.runc.v2 Feb 12 19:17:42.228913 systemd[1]: Started cri-containerd-5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9.scope. 
Feb 12 19:17:42.267476 env[1141]: time="2024-02-12T19:17:42.267430288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wtxfb,Uid:81ae9401-a8eb-42ac-8078-a8399ec03616,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9\"" Feb 12 19:17:42.268905 kubelet[1973]: E0212 19:17:42.268837 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:42.839876 kubelet[1973]: E0212 19:17:42.839845 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:42.849424 kubelet[1973]: I0212 19:17:42.849389 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l2zsq" podStartSLOduration=1.84935304 podCreationTimestamp="2024-02-12 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:42.849179792 +0000 UTC m=+16.147029950" watchObservedRunningTime="2024-02-12 19:17:42.84935304 +0000 UTC m=+16.147203158" Feb 12 19:17:45.330240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092982859.mount: Deactivated successfully. 
Feb 12 19:17:47.583351 env[1141]: time="2024-02-12T19:17:47.583294121Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:47.584475 env[1141]: time="2024-02-12T19:17:47.584450707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:47.585961 env[1141]: time="2024-02-12T19:17:47.585921061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:47.586650 env[1141]: time="2024-02-12T19:17:47.586618022Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:17:47.591282 env[1141]: time="2024-02-12T19:17:47.590958374Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:17:47.592651 env[1141]: time="2024-02-12T19:17:47.592184309Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:17:47.601590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754094661.mount: Deactivated successfully. Feb 12 19:17:47.605252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062029731.mount: Deactivated successfully. 
Feb 12 19:17:47.607433 env[1141]: time="2024-02-12T19:17:47.607390807Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\"" Feb 12 19:17:47.609705 env[1141]: time="2024-02-12T19:17:47.607947071Z" level=info msg="StartContainer for \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\"" Feb 12 19:17:47.627693 systemd[1]: Started cri-containerd-066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6.scope. Feb 12 19:17:47.677012 env[1141]: time="2024-02-12T19:17:47.676954244Z" level=info msg="StartContainer for \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\" returns successfully" Feb 12 19:17:47.727365 systemd[1]: cri-containerd-066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6.scope: Deactivated successfully. Feb 12 19:17:47.859475 kubelet[1973]: E0212 19:17:47.859285 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:47.871043 env[1141]: time="2024-02-12T19:17:47.870994845Z" level=info msg="shim disconnected" id=066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6 Feb 12 19:17:47.871255 env[1141]: time="2024-02-12T19:17:47.871235542Z" level=warning msg="cleaning up after shim disconnected" id=066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6 namespace=k8s.io Feb 12 19:17:47.871429 env[1141]: time="2024-02-12T19:17:47.871410132Z" level=info msg="cleaning up dead shim" Feb 12 19:17:47.879828 env[1141]: time="2024-02-12T19:17:47.879779791Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2400 runtime=io.containerd.runc.v2\n" Feb 12 19:17:48.600145 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6-rootfs.mount: Deactivated successfully. Feb 12 19:17:48.859584 kubelet[1973]: E0212 19:17:48.859266 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:48.862210 env[1141]: time="2024-02-12T19:17:48.862112560Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:17:48.891112 env[1141]: time="2024-02-12T19:17:48.891054900Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\"" Feb 12 19:17:48.891694 env[1141]: time="2024-02-12T19:17:48.891648289Z" level=info msg="StartContainer for \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\"" Feb 12 19:17:48.914501 systemd[1]: Started cri-containerd-a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4.scope. 
Feb 12 19:17:48.958365 env[1141]: time="2024-02-12T19:17:48.958253378Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:48.959563 env[1141]: time="2024-02-12T19:17:48.959533753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:48.961321 env[1141]: time="2024-02-12T19:17:48.961290431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:48.961705 env[1141]: time="2024-02-12T19:17:48.961661534Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:17:48.965324 env[1141]: time="2024-02-12T19:17:48.965290136Z" level=info msg="CreateContainer within sandbox \"5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:17:48.977856 env[1141]: time="2024-02-12T19:17:48.976604187Z" level=info msg="StartContainer for \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\" returns successfully" Feb 12 19:17:48.979993 env[1141]: time="2024-02-12T19:17:48.979949799Z" level=info msg="CreateContainer within sandbox \"5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\"" Feb 12 19:17:48.980383 env[1141]: time="2024-02-12T19:17:48.980352875Z" level=info msg="StartContainer for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\"" Feb 12 19:17:48.993248 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:17:48.993441 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:17:48.994198 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:17:48.995796 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:17:48.997101 systemd[1]: cri-containerd-a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4.scope: Deactivated successfully. Feb 12 19:17:49.004703 systemd[1]: Started cri-containerd-de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8.scope. Feb 12 19:17:49.006656 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:17:49.026208 env[1141]: time="2024-02-12T19:17:49.026151955Z" level=info msg="shim disconnected" id=a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4 Feb 12 19:17:49.026208 env[1141]: time="2024-02-12T19:17:49.026203894Z" level=warning msg="cleaning up after shim disconnected" id=a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4 namespace=k8s.io Feb 12 19:17:49.026208 env[1141]: time="2024-02-12T19:17:49.026213738Z" level=info msg="cleaning up dead shim" Feb 12 19:17:49.036929 env[1141]: time="2024-02-12T19:17:49.036879364Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2487 runtime=io.containerd.runc.v2\n" Feb 12 19:17:49.040220 env[1141]: time="2024-02-12T19:17:49.040185388Z" level=info msg="StartContainer for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" returns successfully" Feb 12 19:17:49.600621 systemd[1]: run-containerd-runc-k8s.io-a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4-runc.T2kus8.mount: Deactivated successfully. 
Feb 12 19:17:49.600726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4-rootfs.mount: Deactivated successfully. Feb 12 19:17:49.861709 kubelet[1973]: E0212 19:17:49.861610 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:49.864815 kubelet[1973]: E0212 19:17:49.864782 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:49.866698 env[1141]: time="2024-02-12T19:17:49.866662534Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:17:49.919547 kubelet[1973]: I0212 19:17:49.919511 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-wtxfb" podStartSLOduration=2.226683492 podCreationTimestamp="2024-02-12 19:17:41 +0000 UTC" firstStartedPulling="2024-02-12 19:17:42.269587027 +0000 UTC m=+15.567437145" lastFinishedPulling="2024-02-12 19:17:48.962377451 +0000 UTC m=+22.260227569" observedRunningTime="2024-02-12 19:17:49.918659614 +0000 UTC m=+23.216509732" watchObservedRunningTime="2024-02-12 19:17:49.919473916 +0000 UTC m=+23.217324034" Feb 12 19:17:49.956436 env[1141]: time="2024-02-12T19:17:49.956390776Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\"" Feb 12 19:17:49.957085 env[1141]: time="2024-02-12T19:17:49.957061625Z" level=info msg="StartContainer for 
\"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\"" Feb 12 19:17:49.975101 systemd[1]: Started cri-containerd-fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47.scope. Feb 12 19:17:50.018241 env[1141]: time="2024-02-12T19:17:50.018189779Z" level=info msg="StartContainer for \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\" returns successfully" Feb 12 19:17:50.031738 systemd[1]: cri-containerd-fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47.scope: Deactivated successfully. Feb 12 19:17:50.051211 env[1141]: time="2024-02-12T19:17:50.051162198Z" level=info msg="shim disconnected" id=fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47 Feb 12 19:17:50.051211 env[1141]: time="2024-02-12T19:17:50.051207174Z" level=warning msg="cleaning up after shim disconnected" id=fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47 namespace=k8s.io Feb 12 19:17:50.051211 env[1141]: time="2024-02-12T19:17:50.051216977Z" level=info msg="cleaning up dead shim" Feb 12 19:17:50.057498 env[1141]: time="2024-02-12T19:17:50.057450589Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2559 runtime=io.containerd.runc.v2\n" Feb 12 19:17:50.599848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47-rootfs.mount: Deactivated successfully. 
Feb 12 19:17:50.869186 kubelet[1973]: E0212 19:17:50.869085 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:50.869557 kubelet[1973]: E0212 19:17:50.869364 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:50.871649 env[1141]: time="2024-02-12T19:17:50.871601847Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:17:50.884186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079841699.mount: Deactivated successfully. Feb 12 19:17:50.884380 env[1141]: time="2024-02-12T19:17:50.884305114Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\"" Feb 12 19:17:50.884863 env[1141]: time="2024-02-12T19:17:50.884834742Z" level=info msg="StartContainer for \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\"" Feb 12 19:17:50.906959 systemd[1]: Started cri-containerd-671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04.scope. Feb 12 19:17:50.943399 systemd[1]: cri-containerd-671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04.scope: Deactivated successfully. 
Feb 12 19:17:50.947683 env[1141]: time="2024-02-12T19:17:50.947632022Z" level=info msg="StartContainer for \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\" returns successfully" Feb 12 19:17:50.967880 env[1141]: time="2024-02-12T19:17:50.967762604Z" level=info msg="shim disconnected" id=671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04 Feb 12 19:17:50.968056 env[1141]: time="2024-02-12T19:17:50.967882807Z" level=warning msg="cleaning up after shim disconnected" id=671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04 namespace=k8s.io Feb 12 19:17:50.968056 env[1141]: time="2024-02-12T19:17:50.967895172Z" level=info msg="cleaning up dead shim" Feb 12 19:17:50.973856 env[1141]: time="2024-02-12T19:17:50.973804748Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2613 runtime=io.containerd.runc.v2\n" Feb 12 19:17:51.599976 systemd[1]: run-containerd-runc-k8s.io-671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04-runc.lb7MV5.mount: Deactivated successfully. Feb 12 19:17:51.600096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04-rootfs.mount: Deactivated successfully. 
Feb 12 19:17:51.874141 kubelet[1973]: E0212 19:17:51.872979 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:51.875913 env[1141]: time="2024-02-12T19:17:51.875860867Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:17:51.892341 env[1141]: time="2024-02-12T19:17:51.892292142Z" level=info msg="CreateContainer within sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\"" Feb 12 19:17:51.893024 env[1141]: time="2024-02-12T19:17:51.892961010Z" level=info msg="StartContainer for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\"" Feb 12 19:17:51.912007 systemd[1]: Started cri-containerd-0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7.scope. 
Feb 12 19:17:51.954842 env[1141]: time="2024-02-12T19:17:51.953701652Z" level=info msg="StartContainer for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" returns successfully" Feb 12 19:17:52.133026 kubelet[1973]: I0212 19:17:52.132875 1973 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:17:52.184204 kubelet[1973]: I0212 19:17:52.184152 1973 topology_manager.go:215] "Topology Admit Handler" podUID="d0626935-ed79-4789-82b4-ce233e8dc05d" podNamespace="kube-system" podName="coredns-5dd5756b68-hjbcm" Feb 12 19:17:52.187399 kubelet[1973]: I0212 19:17:52.187350 1973 topology_manager.go:215] "Topology Admit Handler" podUID="1bccf13d-548c-4280-9848-4fd7b72110d6" podNamespace="kube-system" podName="coredns-5dd5756b68-rfqbt" Feb 12 19:17:52.190092 systemd[1]: Created slice kubepods-burstable-podd0626935_ed79_4789_82b4_ce233e8dc05d.slice. Feb 12 19:17:52.194363 systemd[1]: Created slice kubepods-burstable-pod1bccf13d_548c_4280_9848_4fd7b72110d6.slice. 
Feb 12 19:17:52.274342 kubelet[1973]: I0212 19:17:52.274297 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bccf13d-548c-4280-9848-4fd7b72110d6-config-volume\") pod \"coredns-5dd5756b68-rfqbt\" (UID: \"1bccf13d-548c-4280-9848-4fd7b72110d6\") " pod="kube-system/coredns-5dd5756b68-rfqbt" Feb 12 19:17:52.274493 kubelet[1973]: I0212 19:17:52.274354 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0626935-ed79-4789-82b4-ce233e8dc05d-config-volume\") pod \"coredns-5dd5756b68-hjbcm\" (UID: \"d0626935-ed79-4789-82b4-ce233e8dc05d\") " pod="kube-system/coredns-5dd5756b68-hjbcm" Feb 12 19:17:52.274493 kubelet[1973]: I0212 19:17:52.274389 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cw45\" (UniqueName: \"kubernetes.io/projected/1bccf13d-548c-4280-9848-4fd7b72110d6-kube-api-access-8cw45\") pod \"coredns-5dd5756b68-rfqbt\" (UID: \"1bccf13d-548c-4280-9848-4fd7b72110d6\") " pod="kube-system/coredns-5dd5756b68-rfqbt" Feb 12 19:17:52.274493 kubelet[1973]: I0212 19:17:52.274415 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffvq\" (UniqueName: \"kubernetes.io/projected/d0626935-ed79-4789-82b4-ce233e8dc05d-kube-api-access-xffvq\") pod \"coredns-5dd5756b68-hjbcm\" (UID: \"d0626935-ed79-4789-82b4-ce233e8dc05d\") " pod="kube-system/coredns-5dd5756b68-hjbcm" Feb 12 19:17:52.277843 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 19:17:52.492829 kubelet[1973]: E0212 19:17:52.492727 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:52.493399 env[1141]: time="2024-02-12T19:17:52.493360984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hjbcm,Uid:d0626935-ed79-4789-82b4-ce233e8dc05d,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:52.497730 kubelet[1973]: E0212 19:17:52.497699 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:52.498186 env[1141]: time="2024-02-12T19:17:52.498153391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rfqbt,Uid:1bccf13d-548c-4280-9848-4fd7b72110d6,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:52.503839 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 19:17:52.877325 kubelet[1973]: E0212 19:17:52.877277 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:52.893255 kubelet[1973]: I0212 19:17:52.893219 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9zctg" podStartSLOduration=6.267915704 podCreationTimestamp="2024-02-12 19:17:41 +0000 UTC" firstStartedPulling="2024-02-12 19:17:41.963711388 +0000 UTC m=+15.261561506" lastFinishedPulling="2024-02-12 19:17:47.588978135 +0000 UTC m=+20.886828253" observedRunningTime="2024-02-12 19:17:52.892206932 +0000 UTC m=+26.190057050" watchObservedRunningTime="2024-02-12 19:17:52.893182451 +0000 UTC m=+26.191032569" Feb 12 19:17:53.879210 kubelet[1973]: E0212 19:17:53.879172 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:54.153006 systemd-networkd[1055]: cilium_host: Link UP Feb 12 19:17:54.153873 systemd-networkd[1055]: cilium_net: Link UP Feb 12 19:17:54.154162 systemd-networkd[1055]: cilium_net: Gained carrier Feb 12 19:17:54.156061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:17:54.156149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:17:54.156266 systemd-networkd[1055]: cilium_host: Gained carrier Feb 12 19:17:54.186084 systemd-networkd[1055]: cilium_host: Gained IPv6LL Feb 12 19:17:54.241425 systemd-networkd[1055]: cilium_vxlan: Link UP Feb 12 19:17:54.241431 systemd-networkd[1055]: cilium_vxlan: Gained carrier Feb 12 19:17:54.279201 systemd-networkd[1055]: cilium_net: Gained IPv6LL Feb 12 19:17:54.538850 kernel: NET: Registered PF_ALG protocol family Feb 12 19:17:54.881226 kubelet[1973]: E0212 19:17:54.881185 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:55.143576 systemd-networkd[1055]: lxc_health: Link UP Feb 12 19:17:55.155559 systemd-networkd[1055]: lxc_health: Gained carrier Feb 12 19:17:55.155931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:17:55.572453 systemd-networkd[1055]: lxc04175cb54af1: Link UP Feb 12 19:17:55.581853 kernel: eth0: renamed from tmp6080f Feb 12 19:17:55.597087 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc04175cb54af1: link becomes ready Feb 12 19:17:55.595764 systemd-networkd[1055]: lxc04175cb54af1: Gained carrier Feb 12 19:17:55.596506 systemd-networkd[1055]: lxc6cdbedb3fc97: Link UP Feb 12 19:17:55.603849 kernel: eth0: renamed from tmpcb4ff Feb 12 19:17:55.608262 systemd-networkd[1055]: lxc6cdbedb3fc97: Gained carrier Feb 12 19:17:55.608874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6cdbedb3fc97: link becomes ready Feb 12 19:17:55.871975 systemd-networkd[1055]: cilium_vxlan: Gained IPv6LL Feb 12 19:17:55.882708 kubelet[1973]: E0212 19:17:55.882511 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:56.309324 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:50570.service. Feb 12 19:17:56.350120 sshd[3151]: Accepted publickey for core from 10.0.0.1 port 50570 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:17:56.351728 sshd[3151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:17:56.355705 systemd-logind[1126]: New session 6 of user core. Feb 12 19:17:56.357067 systemd[1]: Started session-6.scope. Feb 12 19:17:56.530484 sshd[3151]: pam_unix(sshd:session): session closed for user core Feb 12 19:17:56.533040 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:50570.service: Deactivated successfully. 
Feb 12 19:17:56.533839 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:17:56.534561 systemd-logind[1126]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:17:56.535263 systemd-logind[1126]: Removed session 6. Feb 12 19:17:56.703968 systemd-networkd[1055]: lxc_health: Gained IPv6LL Feb 12 19:17:56.895984 systemd-networkd[1055]: lxc04175cb54af1: Gained IPv6LL Feb 12 19:17:57.279259 systemd-networkd[1055]: lxc6cdbedb3fc97: Gained IPv6LL Feb 12 19:17:59.314179 env[1141]: time="2024-02-12T19:17:59.313999341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:59.314179 env[1141]: time="2024-02-12T19:17:59.314043192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:59.314179 env[1141]: time="2024-02-12T19:17:59.314053474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:59.314560 env[1141]: time="2024-02-12T19:17:59.314213595Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6080ffe7859bcd317360cc89bb488dbe2419357899c5ca824f1907cd7988d82d pid=3183 runtime=io.containerd.runc.v2 Feb 12 19:17:59.325945 env[1141]: time="2024-02-12T19:17:59.325188824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:59.325945 env[1141]: time="2024-02-12T19:17:59.325238516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:59.325945 env[1141]: time="2024-02-12T19:17:59.325248999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:59.325945 env[1141]: time="2024-02-12T19:17:59.325375671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb4ffe5b712cef96effea1f0e29d22f9ac29fd07ac978d1184866e8c914630bf pid=3201 runtime=io.containerd.runc.v2 Feb 12 19:17:59.328886 systemd[1]: run-containerd-runc-k8s.io-6080ffe7859bcd317360cc89bb488dbe2419357899c5ca824f1907cd7988d82d-runc.jbiR60.mount: Deactivated successfully. Feb 12 19:17:59.330573 systemd[1]: Started cri-containerd-6080ffe7859bcd317360cc89bb488dbe2419357899c5ca824f1907cd7988d82d.scope. Feb 12 19:17:59.342916 systemd[1]: Started cri-containerd-cb4ffe5b712cef96effea1f0e29d22f9ac29fd07ac978d1184866e8c914630bf.scope. Feb 12 19:17:59.371733 systemd-resolved[1086]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:17:59.375166 systemd-resolved[1086]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:17:59.394239 env[1141]: time="2024-02-12T19:17:59.394190515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rfqbt,Uid:1bccf13d-548c-4280-9848-4fd7b72110d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb4ffe5b712cef96effea1f0e29d22f9ac29fd07ac978d1184866e8c914630bf\"" Feb 12 19:17:59.395654 kubelet[1973]: E0212 19:17:59.395634 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:59.396538 env[1141]: time="2024-02-12T19:17:59.396505383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hjbcm,Uid:d0626935-ed79-4789-82b4-ce233e8dc05d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6080ffe7859bcd317360cc89bb488dbe2419357899c5ca824f1907cd7988d82d\"" Feb 12 19:17:59.397412 kubelet[1973]: E0212 
19:17:59.397391 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:59.399013 env[1141]: time="2024-02-12T19:17:59.398974611Z" level=info msg="CreateContainer within sandbox \"cb4ffe5b712cef96effea1f0e29d22f9ac29fd07ac978d1184866e8c914630bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:17:59.402080 env[1141]: time="2024-02-12T19:17:59.401792807Z" level=info msg="CreateContainer within sandbox \"6080ffe7859bcd317360cc89bb488dbe2419357899c5ca824f1907cd7988d82d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:17:59.415253 env[1141]: time="2024-02-12T19:17:59.415192771Z" level=info msg="CreateContainer within sandbox \"cb4ffe5b712cef96effea1f0e29d22f9ac29fd07ac978d1184866e8c914630bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1bb4b8644d114ba80a6da6d6fde13c00e902df5532aa01300172e4a1ceb6a7d\"" Feb 12 19:17:59.417538 env[1141]: time="2024-02-12T19:17:59.415998136Z" level=info msg="StartContainer for \"b1bb4b8644d114ba80a6da6d6fde13c00e902df5532aa01300172e4a1ceb6a7d\"" Feb 12 19:17:59.420790 env[1141]: time="2024-02-12T19:17:59.420748903Z" level=info msg="CreateContainer within sandbox \"6080ffe7859bcd317360cc89bb488dbe2419357899c5ca824f1907cd7988d82d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdab313fc2bc70f20baf24211427c6b233518e73c7e05ed477ed9e21170dd8f6\"" Feb 12 19:17:59.421375 env[1141]: time="2024-02-12T19:17:59.421269435Z" level=info msg="StartContainer for \"fdab313fc2bc70f20baf24211427c6b233518e73c7e05ed477ed9e21170dd8f6\"" Feb 12 19:17:59.432908 systemd[1]: Started cri-containerd-b1bb4b8644d114ba80a6da6d6fde13c00e902df5532aa01300172e4a1ceb6a7d.scope. Feb 12 19:17:59.447408 systemd[1]: Started cri-containerd-fdab313fc2bc70f20baf24211427c6b233518e73c7e05ed477ed9e21170dd8f6.scope. 
Feb 12 19:17:59.496619 env[1141]: time="2024-02-12T19:17:59.496561885Z" level=info msg="StartContainer for \"fdab313fc2bc70f20baf24211427c6b233518e73c7e05ed477ed9e21170dd8f6\" returns successfully" Feb 12 19:17:59.496969 env[1141]: time="2024-02-12T19:17:59.496941381Z" level=info msg="StartContainer for \"b1bb4b8644d114ba80a6da6d6fde13c00e902df5532aa01300172e4a1ceb6a7d\" returns successfully" Feb 12 19:17:59.890285 kubelet[1973]: E0212 19:17:59.890240 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:59.892989 kubelet[1973]: E0212 19:17:59.892959 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:17:59.900913 kubelet[1973]: I0212 19:17:59.900875 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rfqbt" podStartSLOduration=18.900840481 podCreationTimestamp="2024-02-12 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:59.89981014 +0000 UTC m=+33.197660298" watchObservedRunningTime="2024-02-12 19:17:59.900840481 +0000 UTC m=+33.198690599" Feb 12 19:17:59.923331 kubelet[1973]: I0212 19:17:59.923285 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hjbcm" podStartSLOduration=18.923241373 podCreationTimestamp="2024-02-12 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:59.911985673 +0000 UTC m=+33.209835791" watchObservedRunningTime="2024-02-12 19:17:59.923241373 +0000 UTC m=+33.221091491" Feb 12 19:18:00.317969 systemd[1]: 
run-containerd-runc-k8s.io-cb4ffe5b712cef96effea1f0e29d22f9ac29fd07ac978d1184866e8c914630bf-runc.MHY9Dw.mount: Deactivated successfully. Feb 12 19:18:00.894723 kubelet[1973]: E0212 19:18:00.894677 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:00.895364 kubelet[1973]: E0212 19:18:00.895343 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:01.536445 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:50572.service. Feb 12 19:18:01.572302 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 50572 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:01.573742 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:01.577585 systemd-logind[1126]: New session 7 of user core. Feb 12 19:18:01.578098 systemd[1]: Started session-7.scope. Feb 12 19:18:01.693527 sshd[3344]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:01.696245 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:50572.service: Deactivated successfully. Feb 12 19:18:01.697018 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:18:01.697594 systemd-logind[1126]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:18:01.698409 systemd-logind[1126]: Removed session 7. 
Feb 12 19:18:01.896195 kubelet[1973]: E0212 19:18:01.896158 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:01.897436 kubelet[1973]: E0212 19:18:01.896233 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:01.909226 kubelet[1973]: I0212 19:18:01.909180 1973 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:18:01.910036 kubelet[1973]: E0212 19:18:01.910009 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:02.898784 kubelet[1973]: E0212 19:18:02.898738 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:06.699801 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:34444.service. Feb 12 19:18:06.733960 sshd[3358]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:06.735171 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:06.741562 systemd-logind[1126]: New session 8 of user core. Feb 12 19:18:06.742409 systemd[1]: Started session-8.scope. Feb 12 19:18:06.878606 sshd[3358]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:06.881929 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:34444.service: Deactivated successfully. Feb 12 19:18:06.882760 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:18:06.883305 systemd-logind[1126]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:18:06.884177 systemd-logind[1126]: Removed session 8. 
Feb 12 19:18:11.883109 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:34448.service. Feb 12 19:18:11.918507 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 34448 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:11.919674 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:11.923524 systemd-logind[1126]: New session 9 of user core. Feb 12 19:18:11.923990 systemd[1]: Started session-9.scope. Feb 12 19:18:12.044523 sshd[3374]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:12.048054 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:34454.service. Feb 12 19:18:12.048593 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:34448.service: Deactivated successfully. Feb 12 19:18:12.049259 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:18:12.049780 systemd-logind[1126]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:18:12.050731 systemd-logind[1126]: Removed session 9. Feb 12 19:18:12.082562 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 34454 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:12.083958 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:12.087334 systemd-logind[1126]: New session 10 of user core. Feb 12 19:18:12.088237 systemd[1]: Started session-10.scope. Feb 12 19:18:12.843860 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:37954.service. Feb 12 19:18:12.846695 sshd[3388]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:12.849350 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:18:12.850683 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:34454.service: Deactivated successfully. Feb 12 19:18:12.851780 systemd-logind[1126]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:18:12.852599 systemd-logind[1126]: Removed session 10. 
Feb 12 19:18:12.898729 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:12.900364 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:12.904215 systemd-logind[1126]: New session 11 of user core. Feb 12 19:18:12.905106 systemd[1]: Started session-11.scope. Feb 12 19:18:13.024100 sshd[3402]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:13.026329 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:18:13.026915 systemd-logind[1126]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:18:13.027022 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:37954.service: Deactivated successfully. Feb 12 19:18:13.027958 systemd-logind[1126]: Removed session 11. Feb 12 19:18:18.029509 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:37968.service. Feb 12 19:18:18.066973 sshd[3417]: Accepted publickey for core from 10.0.0.1 port 37968 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:18.068353 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:18.073744 systemd-logind[1126]: New session 12 of user core. Feb 12 19:18:18.073994 systemd[1]: Started session-12.scope. Feb 12 19:18:18.193807 sshd[3417]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:18.200211 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:37968.service: Deactivated successfully. Feb 12 19:18:18.200874 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:18:18.201567 systemd-logind[1126]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:18:18.203457 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:37976.service. Feb 12 19:18:18.204165 systemd-logind[1126]: Removed session 12. 
Feb 12 19:18:18.238208 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 37976 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:18.239883 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:18.244249 systemd-logind[1126]: New session 13 of user core. Feb 12 19:18:18.244326 systemd[1]: Started session-13.scope. Feb 12 19:18:18.482027 sshd[3430]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:18.485028 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:37976.service: Deactivated successfully. Feb 12 19:18:18.485729 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:18:18.486368 systemd-logind[1126]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:18:18.487584 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:37980.service. Feb 12 19:18:18.488381 systemd-logind[1126]: Removed session 13. Feb 12 19:18:18.525401 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 37980 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:18.526654 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:18.530348 systemd-logind[1126]: New session 14 of user core. Feb 12 19:18:18.531235 systemd[1]: Started session-14.scope. Feb 12 19:18:19.242787 sshd[3441]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:19.247197 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:37982.service. Feb 12 19:18:19.247721 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:37980.service: Deactivated successfully. Feb 12 19:18:19.248556 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:18:19.250132 systemd-logind[1126]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:18:19.251349 systemd-logind[1126]: Removed session 14. 
Feb 12 19:18:19.284303 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 37982 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:19.285591 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:19.288962 systemd-logind[1126]: New session 15 of user core. Feb 12 19:18:19.289794 systemd[1]: Started session-15.scope. Feb 12 19:18:19.569359 sshd[3459]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:19.572421 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:37982.service: Deactivated successfully. Feb 12 19:18:19.573165 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:18:19.573730 systemd-logind[1126]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:18:19.575177 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:37994.service. Feb 12 19:18:19.575873 systemd-logind[1126]: Removed session 15. Feb 12 19:18:19.612638 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 37994 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:19.614140 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:19.618275 systemd-logind[1126]: New session 16 of user core. Feb 12 19:18:19.619195 systemd[1]: Started session-16.scope. Feb 12 19:18:19.791209 sshd[3471]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:19.793747 systemd-logind[1126]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:18:19.793990 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:37994.service: Deactivated successfully. Feb 12 19:18:19.794875 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:18:19.795653 systemd-logind[1126]: Removed session 16. Feb 12 19:18:24.796930 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:40226.service. 
Feb 12 19:18:24.836847 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 40226 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:24.837280 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:24.840977 systemd-logind[1126]: New session 17 of user core. Feb 12 19:18:24.841477 systemd[1]: Started session-17.scope. Feb 12 19:18:24.960193 sshd[3487]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:24.963095 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:40226.service: Deactivated successfully. Feb 12 19:18:24.963953 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:18:24.964460 systemd-logind[1126]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:18:24.965172 systemd-logind[1126]: Removed session 17. Feb 12 19:18:29.966070 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:40240.service. Feb 12 19:18:30.006919 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 40240 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:30.008401 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:30.012340 systemd-logind[1126]: New session 18 of user core. Feb 12 19:18:30.013326 systemd[1]: Started session-18.scope. Feb 12 19:18:30.135510 sshd[3504]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:30.138252 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:40240.service: Deactivated successfully. Feb 12 19:18:30.139115 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:18:30.139658 systemd-logind[1126]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:18:30.140392 systemd-logind[1126]: Removed session 18. Feb 12 19:18:35.139923 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:45074.service. 
Feb 12 19:18:35.174621 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 45074 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:35.176145 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:35.179444 systemd-logind[1126]: New session 19 of user core. Feb 12 19:18:35.180294 systemd[1]: Started session-19.scope. Feb 12 19:18:35.299909 sshd[3517]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:35.304284 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:45074.service: Deactivated successfully. Feb 12 19:18:35.305182 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:18:35.305760 systemd-logind[1126]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:18:35.306389 systemd-logind[1126]: Removed session 19. Feb 12 19:18:37.802889 kubelet[1973]: E0212 19:18:37.802850 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:39.802485 kubelet[1973]: E0212 19:18:39.802448 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:18:40.303655 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:45090.service. Feb 12 19:18:40.339126 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:40.340309 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:40.343709 systemd-logind[1126]: New session 20 of user core. Feb 12 19:18:40.344567 systemd[1]: Started session-20.scope. Feb 12 19:18:40.472950 sshd[3530]: pam_unix(sshd:session): session closed for user core Feb 12 19:18:40.476709 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:45094.service. 
Feb 12 19:18:40.477387 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:45090.service: Deactivated successfully. Feb 12 19:18:40.478257 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:18:40.478903 systemd-logind[1126]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:18:40.479899 systemd-logind[1126]: Removed session 20. Feb 12 19:18:40.514137 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 45094 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:18:40.515421 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:40.519878 systemd-logind[1126]: New session 21 of user core. Feb 12 19:18:40.520566 systemd[1]: Started session-21.scope. Feb 12 19:18:42.635680 env[1141]: time="2024-02-12T19:18:42.635636588Z" level=info msg="StopContainer for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" with timeout 30 (s)" Feb 12 19:18:42.636483 env[1141]: time="2024-02-12T19:18:42.636409684Z" level=info msg="Stop container \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" with signal terminated" Feb 12 19:18:42.652342 systemd[1]: run-containerd-runc-k8s.io-0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7-runc.UJj24f.mount: Deactivated successfully. Feb 12 19:18:42.653289 systemd[1]: cri-containerd-de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8.scope: Deactivated successfully. Feb 12 19:18:42.673299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8-rootfs.mount: Deactivated successfully. 
Feb 12 19:18:42.674587 env[1141]: time="2024-02-12T19:18:42.674402721Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:18:42.681314 env[1141]: time="2024-02-12T19:18:42.681259318Z" level=info msg="StopContainer for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" with timeout 2 (s)" Feb 12 19:18:42.683196 env[1141]: time="2024-02-12T19:18:42.683160162Z" level=info msg="Stop container \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" with signal terminated" Feb 12 19:18:42.688896 systemd-networkd[1055]: lxc_health: Link DOWN Feb 12 19:18:42.689160 systemd-networkd[1055]: lxc_health: Lost carrier Feb 12 19:18:42.692231 env[1141]: time="2024-02-12T19:18:42.692183780Z" level=info msg="shim disconnected" id=de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8 Feb 12 19:18:42.692231 env[1141]: time="2024-02-12T19:18:42.692227657Z" level=warning msg="cleaning up after shim disconnected" id=de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8 namespace=k8s.io Feb 12 19:18:42.692356 env[1141]: time="2024-02-12T19:18:42.692237336Z" level=info msg="cleaning up dead shim" Feb 12 19:18:42.698517 env[1141]: time="2024-02-12T19:18:42.698475463Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3597 runtime=io.containerd.runc.v2\n" Feb 12 19:18:42.701077 env[1141]: time="2024-02-12T19:18:42.701007335Z" level=info msg="StopContainer for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" returns successfully" Feb 12 19:18:42.703263 env[1141]: time="2024-02-12T19:18:42.703227273Z" level=info msg="StopPodSandbox for \"5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9\"" Feb 12 19:18:42.703990 
env[1141]: time="2024-02-12T19:18:42.703934734Z" level=info msg="Container to stop \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:18:42.705325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9-shm.mount: Deactivated successfully. Feb 12 19:18:42.715573 systemd[1]: cri-containerd-5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9.scope: Deactivated successfully. Feb 12 19:18:42.729220 systemd[1]: cri-containerd-0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7.scope: Deactivated successfully. Feb 12 19:18:42.729520 systemd[1]: cri-containerd-0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7.scope: Consumed 6.738s CPU time. Feb 12 19:18:42.735992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9-rootfs.mount: Deactivated successfully. 
Feb 12 19:18:42.740740 env[1141]: time="2024-02-12T19:18:42.740683994Z" level=info msg="shim disconnected" id=5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9 Feb 12 19:18:42.740740 env[1141]: time="2024-02-12T19:18:42.740740989Z" level=warning msg="cleaning up after shim disconnected" id=5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9 namespace=k8s.io Feb 12 19:18:42.740942 env[1141]: time="2024-02-12T19:18:42.740750309Z" level=info msg="cleaning up dead shim" Feb 12 19:18:42.748330 env[1141]: time="2024-02-12T19:18:42.748207616Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:18:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 12 19:18:42.748721 env[1141]: time="2024-02-12T19:18:42.748692376Z" level=info msg="TearDown network for sandbox \"5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9\" successfully" Feb 12 19:18:42.748967 env[1141]: time="2024-02-12T19:18:42.748759090Z" level=info msg="StopPodSandbox for \"5fea8390db5ba96c4bd4309f823ab8b85aa404e9df892ae2e06a1c2d3d14fbe9\" returns successfully" Feb 12 19:18:42.751938 env[1141]: time="2024-02-12T19:18:42.751895273Z" level=info msg="shim disconnected" id=0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7 Feb 12 19:18:42.751938 env[1141]: time="2024-02-12T19:18:42.751939389Z" level=warning msg="cleaning up after shim disconnected" id=0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7 namespace=k8s.io Feb 12 19:18:42.752086 env[1141]: time="2024-02-12T19:18:42.751949708Z" level=info msg="cleaning up dead shim" Feb 12 19:18:42.760395 env[1141]: time="2024-02-12T19:18:42.760357217Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:42Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=3656 runtime=io.containerd.runc.v2\n" Feb 12 19:18:42.763630 env[1141]: time="2024-02-12T19:18:42.763582992Z" level=info msg="StopContainer for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" returns successfully" Feb 12 19:18:42.764026 env[1141]: time="2024-02-12T19:18:42.764004397Z" level=info msg="StopPodSandbox for \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\"" Feb 12 19:18:42.764080 env[1141]: time="2024-02-12T19:18:42.764057993Z" level=info msg="Container to stop \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:18:42.764080 env[1141]: time="2024-02-12T19:18:42.764074472Z" level=info msg="Container to stop \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:18:42.764138 env[1141]: time="2024-02-12T19:18:42.764088751Z" level=info msg="Container to stop \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:18:42.764138 env[1141]: time="2024-02-12T19:18:42.764102669Z" level=info msg="Container to stop \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:18:42.764138 env[1141]: time="2024-02-12T19:18:42.764113988Z" level=info msg="Container to stop \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:18:42.769142 systemd[1]: cri-containerd-31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793.scope: Deactivated successfully. 
Feb 12 19:18:42.795444 env[1141]: time="2024-02-12T19:18:42.795399377Z" level=info msg="shim disconnected" id=31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793 Feb 12 19:18:42.795642 env[1141]: time="2024-02-12T19:18:42.795624679Z" level=warning msg="cleaning up after shim disconnected" id=31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793 namespace=k8s.io Feb 12 19:18:42.795699 env[1141]: time="2024-02-12T19:18:42.795687074Z" level=info msg="cleaning up dead shim" Feb 12 19:18:42.804969 env[1141]: time="2024-02-12T19:18:42.804933194Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3687 runtime=io.containerd.runc.v2\n" Feb 12 19:18:42.805481 env[1141]: time="2024-02-12T19:18:42.805450791Z" level=info msg="TearDown network for sandbox \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" successfully" Feb 12 19:18:42.805571 env[1141]: time="2024-02-12T19:18:42.805553223Z" level=info msg="StopPodSandbox for \"31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793\" returns successfully" Feb 12 19:18:42.890523 kubelet[1973]: I0212 19:18:42.890404 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-kernel\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.890523 kubelet[1973]: I0212 19:18:42.890453 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e25e999-c607-4c7e-9400-b44195b742b4-clustermesh-secrets\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.890523 kubelet[1973]: I0212 19:18:42.890476 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-bpf-maps\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.890523 kubelet[1973]: I0212 19:18:42.890494 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-xtables-lock\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891442 kubelet[1973]: I0212 19:18:42.891060 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-net\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891442 kubelet[1973]: I0212 19:18:42.891108 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps7g2\" (UniqueName: \"kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-kube-api-access-ps7g2\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891442 kubelet[1973]: I0212 19:18:42.891160 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-lib-modules\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891442 kubelet[1973]: I0212 19:18:42.891195 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-config-path\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891442 
kubelet[1973]: I0212 19:18:42.891214 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-hostproc\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891442 kubelet[1973]: I0212 19:18:42.891231 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-etc-cni-netd\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891638 kubelet[1973]: I0212 19:18:42.891252 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81ae9401-a8eb-42ac-8078-a8399ec03616-cilium-config-path\") pod \"81ae9401-a8eb-42ac-8078-a8399ec03616\" (UID: \"81ae9401-a8eb-42ac-8078-a8399ec03616\") " Feb 12 19:18:42.891638 kubelet[1973]: I0212 19:18:42.891273 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-hubble-tls\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891638 kubelet[1973]: I0212 19:18:42.891290 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cni-path\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891638 kubelet[1973]: I0212 19:18:42.891307 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-cgroup\") pod 
\"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891638 kubelet[1973]: I0212 19:18:42.891324 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-run\") pod \"3e25e999-c607-4c7e-9400-b44195b742b4\" (UID: \"3e25e999-c607-4c7e-9400-b44195b742b4\") " Feb 12 19:18:42.891638 kubelet[1973]: I0212 19:18:42.891343 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqplx\" (UniqueName: \"kubernetes.io/projected/81ae9401-a8eb-42ac-8078-a8399ec03616-kube-api-access-mqplx\") pod \"81ae9401-a8eb-42ac-8078-a8399ec03616\" (UID: \"81ae9401-a8eb-42ac-8078-a8399ec03616\") " Feb 12 19:18:42.892247 kubelet[1973]: I0212 19:18:42.892223 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.892343 kubelet[1973]: I0212 19:18:42.892257 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.892405 kubelet[1973]: I0212 19:18:42.892282 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.892461 kubelet[1973]: I0212 19:18:42.892294 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.892688 kubelet[1973]: I0212 19:18:42.892651 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.893055 kubelet[1973]: I0212 19:18:42.893003 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.893264 kubelet[1973]: I0212 19:18:42.893242 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.893356 kubelet[1973]: I0212 19:18:42.893242 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.893441 kubelet[1973]: I0212 19:18:42.893427 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.893512 kubelet[1973]: I0212 19:18:42.893499 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:18:42.895309 kubelet[1973]: I0212 19:18:42.895228 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:18:42.895505 kubelet[1973]: I0212 19:18:42.895475 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e25e999-c607-4c7e-9400-b44195b742b4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:18:42.896016 kubelet[1973]: I0212 19:18:42.895973 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-kube-api-access-ps7g2" (OuterVolumeSpecName: "kube-api-access-ps7g2") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "kube-api-access-ps7g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:42.896201 kubelet[1973]: I0212 19:18:42.896155 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ae9401-a8eb-42ac-8078-a8399ec03616-kube-api-access-mqplx" (OuterVolumeSpecName: "kube-api-access-mqplx") pod "81ae9401-a8eb-42ac-8078-a8399ec03616" (UID: "81ae9401-a8eb-42ac-8078-a8399ec03616"). InnerVolumeSpecName "kube-api-access-mqplx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:42.897321 kubelet[1973]: I0212 19:18:42.897288 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ae9401-a8eb-42ac-8078-a8399ec03616-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81ae9401-a8eb-42ac-8078-a8399ec03616" (UID: "81ae9401-a8eb-42ac-8078-a8399ec03616"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:18:42.897795 kubelet[1973]: I0212 19:18:42.897765 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e25e999-c607-4c7e-9400-b44195b742b4" (UID: "3e25e999-c607-4c7e-9400-b44195b742b4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:18:42.976498 kubelet[1973]: I0212 19:18:42.976458 1973 scope.go:117] "RemoveContainer" containerID="de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8" Feb 12 19:18:42.979469 env[1141]: time="2024-02-12T19:18:42.979334540Z" level=info msg="RemoveContainer for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\"" Feb 12 19:18:42.980128 systemd[1]: Removed slice kubepods-besteffort-pod81ae9401_a8eb_42ac_8078_a8399ec03616.slice. 
Feb 12 19:18:42.983047 env[1141]: time="2024-02-12T19:18:42.983000359Z" level=info msg="RemoveContainer for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" returns successfully" Feb 12 19:18:42.983196 kubelet[1973]: I0212 19:18:42.983175 1973 scope.go:117] "RemoveContainer" containerID="de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8" Feb 12 19:18:42.983707 env[1141]: time="2024-02-12T19:18:42.983630747Z" level=error msg="ContainerStatus for \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\": not found" Feb 12 19:18:42.987925 kubelet[1973]: E0212 19:18:42.987887 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\": not found" containerID="de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8" Feb 12 19:18:42.988244 kubelet[1973]: I0212 19:18:42.988214 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8"} err="failed to get container status \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"de379946f519e033ab5f49e65a0fc2a6ad6517673b1bfea3f6ffa314be0ec0b8\": not found" Feb 12 19:18:42.988244 kubelet[1973]: I0212 19:18:42.988240 1973 scope.go:117] "RemoveContainer" containerID="0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7" Feb 12 19:18:42.990005 env[1141]: time="2024-02-12T19:18:42.989976026Z" level=info msg="RemoveContainer for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\"" Feb 12 19:18:42.991584 kubelet[1973]: 
I0212 19:18:42.991566 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991648 kubelet[1973]: I0212 19:18:42.991589 1973 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991648 kubelet[1973]: I0212 19:18:42.991601 1973 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991648 kubelet[1973]: I0212 19:18:42.991611 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81ae9401-a8eb-42ac-8078-a8399ec03616-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991648 kubelet[1973]: I0212 19:18:42.991620 1973 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991744 kubelet[1973]: I0212 19:18:42.991629 1973 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991744 kubelet[1973]: I0212 19:18:42.991663 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991744 kubelet[1973]: I0212 19:18:42.991672 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991813 kubelet[1973]: I0212 19:18:42.991681 1973 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mqplx\" (UniqueName: \"kubernetes.io/projected/81ae9401-a8eb-42ac-8078-a8399ec03616-kube-api-access-mqplx\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991813 kubelet[1973]: I0212 19:18:42.991792 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991890 kubelet[1973]: I0212 19:18:42.991861 1973 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e25e999-c607-4c7e-9400-b44195b742b4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991890 kubelet[1973]: I0212 19:18:42.991876 1973 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991890 kubelet[1973]: I0212 19:18:42.991885 1973 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991950 kubelet[1973]: I0212 19:18:42.991894 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991950 kubelet[1973]: I0212 19:18:42.991904 1973 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ps7g2\" (UniqueName: 
\"kubernetes.io/projected/3e25e999-c607-4c7e-9400-b44195b742b4-kube-api-access-ps7g2\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.991950 kubelet[1973]: I0212 19:18:42.991913 1973 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e25e999-c607-4c7e-9400-b44195b742b4-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 19:18:42.992890 systemd[1]: Removed slice kubepods-burstable-pod3e25e999_c607_4c7e_9400_b44195b742b4.slice. Feb 12 19:18:42.992971 systemd[1]: kubepods-burstable-pod3e25e999_c607_4c7e_9400_b44195b742b4.slice: Consumed 6.980s CPU time. Feb 12 19:18:42.995427 env[1141]: time="2024-02-12T19:18:42.995374742Z" level=info msg="RemoveContainer for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" returns successfully" Feb 12 19:18:42.995565 kubelet[1973]: I0212 19:18:42.995535 1973 scope.go:117] "RemoveContainer" containerID="671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04" Feb 12 19:18:42.998620 env[1141]: time="2024-02-12T19:18:42.998582518Z" level=info msg="RemoveContainer for \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\"" Feb 12 19:18:43.003491 env[1141]: time="2024-02-12T19:18:43.003434694Z" level=info msg="RemoveContainer for \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\" returns successfully" Feb 12 19:18:43.003684 kubelet[1973]: I0212 19:18:43.003646 1973 scope.go:117] "RemoveContainer" containerID="fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47" Feb 12 19:18:43.004757 env[1141]: time="2024-02-12T19:18:43.004688438Z" level=info msg="RemoveContainer for \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\"" Feb 12 19:18:43.006829 env[1141]: time="2024-02-12T19:18:43.006784357Z" level=info msg="RemoveContainer for \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\" returns successfully" Feb 12 19:18:43.007071 kubelet[1973]: I0212 19:18:43.007024 
1973 scope.go:117] "RemoveContainer" containerID="a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4"
Feb 12 19:18:43.008526 env[1141]: time="2024-02-12T19:18:43.008258524Z" level=info msg="RemoveContainer for \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\""
Feb 12 19:18:43.010350 env[1141]: time="2024-02-12T19:18:43.010264250Z" level=info msg="RemoveContainer for \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\" returns successfully"
Feb 12 19:18:43.010453 kubelet[1973]: I0212 19:18:43.010429 1973 scope.go:117] "RemoveContainer" containerID="066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6"
Feb 12 19:18:43.011578 env[1141]: time="2024-02-12T19:18:43.011343648Z" level=info msg="RemoveContainer for \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\""
Feb 12 19:18:43.013406 env[1141]: time="2024-02-12T19:18:43.013377132Z" level=info msg="RemoveContainer for \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\" returns successfully"
Feb 12 19:18:43.013630 kubelet[1973]: I0212 19:18:43.013612 1973 scope.go:117] "RemoveContainer" containerID="0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7"
Feb 12 19:18:43.013884 env[1141]: time="2024-02-12T19:18:43.013795540Z" level=error msg="ContainerStatus for \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\": not found"
Feb 12 19:18:43.014090 kubelet[1973]: E0212 19:18:43.014074 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\": not found" containerID="0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7"
Feb 12 19:18:43.014136 kubelet[1973]: I0212 19:18:43.014109 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7"} err="failed to get container status \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7\": not found"
Feb 12 19:18:43.014136 kubelet[1973]: I0212 19:18:43.014119 1973 scope.go:117] "RemoveContainer" containerID="671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04"
Feb 12 19:18:43.014341 env[1141]: time="2024-02-12T19:18:43.014294342Z" level=error msg="ContainerStatus for \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\": not found"
Feb 12 19:18:43.014518 kubelet[1973]: E0212 19:18:43.014502 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\": not found" containerID="671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04"
Feb 12 19:18:43.014563 kubelet[1973]: I0212 19:18:43.014529 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04"} err="failed to get container status \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\": rpc error: code = NotFound desc = an error occurred when try to find container \"671c7e91ba649351d2db6e81cd48b6132174b68bc22707f82dd776341ac8cf04\": not found"
Feb 12 19:18:43.014563 kubelet[1973]: I0212 19:18:43.014541 1973 scope.go:117] "RemoveContainer" containerID="fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47"
Feb 12 19:18:43.014744 env[1141]: time="2024-02-12T19:18:43.014703750Z" level=error msg="ContainerStatus for \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\": not found"
Feb 12 19:18:43.014934 kubelet[1973]: E0212 19:18:43.014922 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\": not found" containerID="fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47"
Feb 12 19:18:43.014994 kubelet[1973]: I0212 19:18:43.014965 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47"} err="failed to get container status \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc031b714c778360c441c890340a4d5b9ad6ce2208a65c6f0984625937759e47\": not found"
Feb 12 19:18:43.014994 kubelet[1973]: I0212 19:18:43.014975 1973 scope.go:117] "RemoveContainer" containerID="a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4"
Feb 12 19:18:43.015194 env[1141]: time="2024-02-12T19:18:43.015152556Z" level=error msg="ContainerStatus for \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\": not found"
Feb 12 19:18:43.015361 kubelet[1973]: E0212 19:18:43.015347 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\": not found" containerID="a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4"
Feb 12 19:18:43.015405 kubelet[1973]: I0212 19:18:43.015371 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4"} err="failed to get container status \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a93a82e9b5919350c4a6fe26dccd44f0c4fa33c742c849a9473e3aa2796769d4\": not found"
Feb 12 19:18:43.015405 kubelet[1973]: I0212 19:18:43.015380 1973 scope.go:117] "RemoveContainer" containerID="066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6"
Feb 12 19:18:43.015611 env[1141]: time="2024-02-12T19:18:43.015567564Z" level=error msg="ContainerStatus for \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\": not found"
Feb 12 19:18:43.015778 kubelet[1973]: E0212 19:18:43.015765 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\": not found" containerID="066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6"
Feb 12 19:18:43.015842 kubelet[1973]: I0212 19:18:43.015787 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6"} err="failed to get container status \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\": rpc error: code = NotFound desc = an error occurred when try to find container \"066c4d0239872fa7db2171bcf0732561602b75186f9298169c7aea81ddb9ecd6\": not found"
Feb 12 19:18:43.647870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e1dd577ceea7508e79af8bd526565bb44a86162a202a76e83c73d583f9e3cf7-rootfs.mount: Deactivated successfully.
Feb 12 19:18:43.647979 systemd[1]: var-lib-kubelet-pods-81ae9401\x2da8eb\x2d42ac\x2d8078\x2da8399ec03616-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmqplx.mount: Deactivated successfully.
Feb 12 19:18:43.648042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793-rootfs.mount: Deactivated successfully.
Feb 12 19:18:43.648098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31b1a226b69535d23c26c2af7f314b712998bcddd9d62fc7ebba6cfd9c86e793-shm.mount: Deactivated successfully.
Feb 12 19:18:43.648151 systemd[1]: var-lib-kubelet-pods-3e25e999\x2dc607\x2d4c7e\x2d9400\x2db44195b742b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dps7g2.mount: Deactivated successfully.
Feb 12 19:18:43.648210 systemd[1]: var-lib-kubelet-pods-3e25e999\x2dc607\x2d4c7e\x2d9400\x2db44195b742b4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:18:43.648261 systemd[1]: var-lib-kubelet-pods-3e25e999\x2dc607\x2d4c7e\x2d9400\x2db44195b742b4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:18:43.802484 kubelet[1973]: E0212 19:18:43.802454 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:44.591613 sshd[3542]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:44.593919 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:45094.service: Deactivated successfully.
Feb 12 19:18:44.594531 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 19:18:44.594710 systemd[1]: session-21.scope: Consumed 1.442s CPU time.
Feb 12 19:18:44.595129 systemd-logind[1126]: Session 21 logged out. Waiting for processes to exit.
Feb 12 19:18:44.596425 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:54576.service.
Feb 12 19:18:44.597353 systemd-logind[1126]: Removed session 21.
Feb 12 19:18:44.631210 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 54576 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:44.632668 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:44.636224 systemd-logind[1126]: New session 22 of user core.
Feb 12 19:18:44.637133 systemd[1]: Started session-22.scope.
Feb 12 19:18:44.804183 kubelet[1973]: I0212 19:18:44.804152 1973 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" path="/var/lib/kubelet/pods/3e25e999-c607-4c7e-9400-b44195b742b4/volumes"
Feb 12 19:18:44.805109 kubelet[1973]: I0212 19:18:44.805085 1973 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81ae9401-a8eb-42ac-8078-a8399ec03616" path="/var/lib/kubelet/pods/81ae9401-a8eb-42ac-8078-a8399ec03616/volumes"
Feb 12 19:18:46.011145 sshd[3707]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:46.014848 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:54588.service.
Feb 12 19:18:46.015412 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:54576.service: Deactivated successfully.
Feb 12 19:18:46.016212 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 19:18:46.016393 systemd[1]: session-22.scope: Consumed 1.292s CPU time.
Feb 12 19:18:46.016884 systemd-logind[1126]: Session 22 logged out. Waiting for processes to exit.
Feb 12 19:18:46.019259 systemd-logind[1126]: Removed session 22.
Feb 12 19:18:46.033383 kubelet[1973]: I0212 19:18:46.033333 1973 topology_manager.go:215] "Topology Admit Handler" podUID="efdcb056-c3cb-4537-9c42-9cbacdb55e9f" podNamespace="kube-system" podName="cilium-t2wpx"
Feb 12 19:18:46.033690 kubelet[1973]: E0212 19:18:46.033429 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" containerName="mount-cgroup"
Feb 12 19:18:46.033690 kubelet[1973]: E0212 19:18:46.033443 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" containerName="apply-sysctl-overwrites"
Feb 12 19:18:46.033690 kubelet[1973]: E0212 19:18:46.033452 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" containerName="mount-bpf-fs"
Feb 12 19:18:46.033690 kubelet[1973]: E0212 19:18:46.033462 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81ae9401-a8eb-42ac-8078-a8399ec03616" containerName="cilium-operator"
Feb 12 19:18:46.033690 kubelet[1973]: E0212 19:18:46.033468 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" containerName="clean-cilium-state"
Feb 12 19:18:46.033690 kubelet[1973]: E0212 19:18:46.033476 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" containerName="cilium-agent"
Feb 12 19:18:46.033690 kubelet[1973]: I0212 19:18:46.033505 1973 memory_manager.go:346] "RemoveStaleState removing state" podUID="81ae9401-a8eb-42ac-8078-a8399ec03616" containerName="cilium-operator"
Feb 12 19:18:46.033690 kubelet[1973]: I0212 19:18:46.033511 1973 memory_manager.go:346] "RemoveStaleState removing state" podUID="3e25e999-c607-4c7e-9400-b44195b742b4" containerName="cilium-agent"
Feb 12 19:18:46.039452 systemd[1]: Created slice kubepods-burstable-podefdcb056_c3cb_4537_9c42_9cbacdb55e9f.slice.
Feb 12 19:18:46.067706 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:46.069547 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:46.073110 systemd-logind[1126]: New session 23 of user core.
Feb 12 19:18:46.074013 systemd[1]: Started session-23.scope.
Feb 12 19:18:46.106234 kubelet[1973]: I0212 19:18:46.106187 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxvv\" (UniqueName: \"kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-kube-api-access-7fxvv\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106234 kubelet[1973]: I0212 19:18:46.106241 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-bpf-maps\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106378 kubelet[1973]: I0212 19:18:46.106260 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cni-path\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106378 kubelet[1973]: I0212 19:18:46.106290 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-etc-cni-netd\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106378 kubelet[1973]: I0212 19:18:46.106311 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-clustermesh-secrets\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106378 kubelet[1973]: I0212 19:18:46.106331 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-config-path\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106378 kubelet[1973]: I0212 19:18:46.106357 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-ipsec-secrets\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106508 kubelet[1973]: I0212 19:18:46.106383 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-net\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106508 kubelet[1973]: I0212 19:18:46.106412 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-run\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106508 kubelet[1973]: I0212 19:18:46.106438 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-cgroup\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106508 kubelet[1973]: I0212 19:18:46.106456 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-lib-modules\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106508 kubelet[1973]: I0212 19:18:46.106474 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-xtables-lock\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106508 kubelet[1973]: I0212 19:18:46.106492 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-kernel\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106664 kubelet[1973]: I0212 19:18:46.106519 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hubble-tls\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.106664 kubelet[1973]: I0212 19:18:46.106539 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hostproc\") pod \"cilium-t2wpx\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") " pod="kube-system/cilium-t2wpx"
Feb 12 19:18:46.194762 sshd[3719]: pam_unix(sshd:session): session closed for user core
Feb 12 19:18:46.198330 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:54594.service.
Feb 12 19:18:46.201770 kubelet[1973]: E0212 19:18:46.200929 1973 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-7fxvv lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-t2wpx" podUID="efdcb056-c3cb-4537-9c42-9cbacdb55e9f"
Feb 12 19:18:46.203542 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 19:18:46.206735 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:54588.service: Deactivated successfully.
Feb 12 19:18:46.220205 systemd-logind[1126]: Session 23 logged out. Waiting for processes to exit.
Feb 12 19:18:46.223289 systemd-logind[1126]: Removed session 23.
Feb 12 19:18:46.239936 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 54594 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:18:46.241186 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:18:46.244884 systemd-logind[1126]: New session 24 of user core.
Feb 12 19:18:46.245452 systemd[1]: Started session-24.scope.
Feb 12 19:18:46.859714 kubelet[1973]: E0212 19:18:46.859680 1973 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:18:47.112451 kubelet[1973]: I0212 19:18:47.112352 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-clustermesh-secrets\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.112809 kubelet[1973]: I0212 19:18:47.112794 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-config-path\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.112924 kubelet[1973]: I0212 19:18:47.112912 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-run\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.112992 kubelet[1973]: I0212 19:18:47.112983 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-cgroup\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113084 kubelet[1973]: I0212 19:18:47.113073 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hostproc\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113143 kubelet[1973]: I0212 19:18:47.113003 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.113177 kubelet[1973]: I0212 19:18:47.113028 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.113177 kubelet[1973]: I0212 19:18:47.113165 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hostproc" (OuterVolumeSpecName: "hostproc") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.113261 kubelet[1973]: I0212 19:18:47.113250 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-etc-cni-netd\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113345 kubelet[1973]: I0212 19:18:47.113332 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-ipsec-secrets\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113420 kubelet[1973]: I0212 19:18:47.113411 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fxvv\" (UniqueName: \"kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-kube-api-access-7fxvv\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113492 kubelet[1973]: I0212 19:18:47.113481 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-bpf-maps\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113584 kubelet[1973]: I0212 19:18:47.113574 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-lib-modules\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113654 kubelet[1973]: I0212 19:18:47.113644 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cni-path\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113724 kubelet[1973]: I0212 19:18:47.113715 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-kernel\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113791 kubelet[1973]: I0212 19:18:47.113781 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-xtables-lock\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113881 kubelet[1973]: I0212 19:18:47.113870 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hubble-tls\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.113962 kubelet[1973]: I0212 19:18:47.113951 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-net\") pod \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\" (UID: \"efdcb056-c3cb-4537-9c42-9cbacdb55e9f\") "
Feb 12 19:18:47.114064 kubelet[1973]: I0212 19:18:47.114052 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.114132 kubelet[1973]: I0212 19:18:47.114122 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.114185 kubelet[1973]: I0212 19:18:47.114177 1973 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.114272 kubelet[1973]: I0212 19:18:47.114258 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.114329 kubelet[1973]: I0212 19:18:47.113251 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.114534 kubelet[1973]: I0212 19:18:47.114488 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:18:47.114593 kubelet[1973]: I0212 19:18:47.114547 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cni-path" (OuterVolumeSpecName: "cni-path") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.114593 kubelet[1973]: I0212 19:18:47.114568 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.114593 kubelet[1973]: I0212 19:18:47.114584 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.114666 kubelet[1973]: I0212 19:18:47.114599 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.114666 kubelet[1973]: I0212 19:18:47.114615 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:18:47.116426 kubelet[1973]: I0212 19:18:47.116387 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:18:47.117597 systemd[1]: var-lib-kubelet-pods-efdcb056\x2dc3cb\x2d4537\x2d9c42\x2d9cbacdb55e9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7fxvv.mount: Deactivated successfully.
Feb 12 19:18:47.117694 systemd[1]: var-lib-kubelet-pods-efdcb056\x2dc3cb\x2d4537\x2d9c42\x2d9cbacdb55e9f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:18:47.117747 systemd[1]: var-lib-kubelet-pods-efdcb056\x2dc3cb\x2d4537\x2d9c42\x2d9cbacdb55e9f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:18:47.118352 kubelet[1973]: I0212 19:18:47.118229 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:18:47.118620 kubelet[1973]: I0212 19:18:47.118593 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-kube-api-access-7fxvv" (OuterVolumeSpecName: "kube-api-access-7fxvv") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "kube-api-access-7fxvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:18:47.118711 kubelet[1973]: I0212 19:18:47.118627 1973 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "efdcb056-c3cb-4537-9c42-9cbacdb55e9f" (UID: "efdcb056-c3cb-4537-9c42-9cbacdb55e9f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:18:47.119543 systemd[1]: var-lib-kubelet-pods-efdcb056\x2dc3cb\x2d4537\x2d9c42\x2d9cbacdb55e9f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:18:47.214798 kubelet[1973]: I0212 19:18:47.214762 1973 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.214993 kubelet[1973]: I0212 19:18:47.214978 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215053 kubelet[1973]: I0212 19:18:47.215044 1973 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215106 kubelet[1973]: I0212 19:18:47.215097 1973 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215159 kubelet[1973]: I0212 19:18:47.215149 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215217 kubelet[1973]: I0212 19:18:47.215208 1973 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215300 kubelet[1973]: I0212 19:18:47.215261 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215358 kubelet[1973]: I0212 19:18:47.215349 1973 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215411 kubelet[1973]: I0212 19:18:47.215403 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215464 kubelet[1973]: I0212 19:18:47.215455 1973 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7fxvv\" (UniqueName: \"kubernetes.io/projected/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-kube-api-access-7fxvv\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215517 kubelet[1973]: I0212 19:18:47.215508 1973 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:47.215591 kubelet[1973]: I0212 19:18:47.215581 1973 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efdcb056-c3cb-4537-9c42-9cbacdb55e9f-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 19:18:48.000301 systemd[1]: Removed slice kubepods-burstable-podefdcb056_c3cb_4537_9c42_9cbacdb55e9f.slice.
Feb 12 19:18:48.028455 kubelet[1973]: I0212 19:18:48.028410 1973 topology_manager.go:215] "Topology Admit Handler" podUID="2afb901f-139c-40ac-b29d-8cbc3775401a" podNamespace="kube-system" podName="cilium-mhflc"
Feb 12 19:18:48.033581 systemd[1]: Created slice kubepods-burstable-pod2afb901f_139c_40ac_b29d_8cbc3775401a.slice.
Feb 12 19:18:48.122583 kubelet[1973]: I0212 19:18:48.122551 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-etc-cni-netd\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122583 kubelet[1973]: I0212 19:18:48.122593 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-hostproc\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122957 kubelet[1973]: I0212 19:18:48.122613 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsng\" (UniqueName: \"kubernetes.io/projected/2afb901f-139c-40ac-b29d-8cbc3775401a-kube-api-access-fbsng\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122957 kubelet[1973]: I0212 19:18:48.122635 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-host-proc-sys-kernel\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122957 kubelet[1973]: I0212 19:18:48.122701 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2afb901f-139c-40ac-b29d-8cbc3775401a-hubble-tls\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122957 kubelet[1973]: I0212 19:18:48.122737 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-cilium-run\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122957 kubelet[1973]: I0212 19:18:48.122757 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-lib-modules\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.122957 kubelet[1973]: I0212 19:18:48.122792 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-xtables-lock\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123140 kubelet[1973]: I0212 19:18:48.122832 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-host-proc-sys-net\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123140 kubelet[1973]: I0212 19:18:48.122859 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-bpf-maps\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123140 kubelet[1973]: I0212 19:18:48.122878 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-cilium-cgroup\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123140 kubelet[1973]: I0212 19:18:48.122916 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2afb901f-139c-40ac-b29d-8cbc3775401a-cni-path\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123140 kubelet[1973]: I0212 19:18:48.122946 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2afb901f-139c-40ac-b29d-8cbc3775401a-cilium-ipsec-secrets\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123140 kubelet[1973]: I0212 19:18:48.122966 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2afb901f-139c-40ac-b29d-8cbc3775401a-cilium-config-path\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.123283 kubelet[1973]: I0212 19:18:48.122998 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2afb901f-139c-40ac-b29d-8cbc3775401a-clustermesh-secrets\") pod \"cilium-mhflc\" (UID: \"2afb901f-139c-40ac-b29d-8cbc3775401a\") " pod="kube-system/cilium-mhflc"
Feb 12 19:18:48.335964 kubelet[1973]: E0212 19:18:48.335937 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:48.336811 env[1141]: time="2024-02-12T19:18:48.336427957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhflc,Uid:2afb901f-139c-40ac-b29d-8cbc3775401a,Namespace:kube-system,Attempt:0,}"
Feb 12 19:18:48.347977 env[1141]: time="2024-02-12T19:18:48.347860128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:18:48.347977 env[1141]: time="2024-02-12T19:18:48.347947444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:18:48.347977 env[1141]: time="2024-02-12T19:18:48.347958843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:18:48.348247 env[1141]: time="2024-02-12T19:18:48.348192031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0 pid=3762 runtime=io.containerd.runc.v2
Feb 12 19:18:48.358406 systemd[1]: Started cri-containerd-126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0.scope.
Feb 12 19:18:48.402786 env[1141]: time="2024-02-12T19:18:48.402746302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhflc,Uid:2afb901f-139c-40ac-b29d-8cbc3775401a,Namespace:kube-system,Attempt:0,} returns sandbox id \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\""
Feb 12 19:18:48.404098 kubelet[1973]: E0212 19:18:48.404077 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:48.407624 env[1141]: time="2024-02-12T19:18:48.407589093Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:18:48.416471 env[1141]: time="2024-02-12T19:18:48.416427118Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6\""
Feb 12 19:18:48.416914 env[1141]: time="2024-02-12T19:18:48.416888014Z" level=info msg="StartContainer for \"43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6\""
Feb 12 19:18:48.430035 systemd[1]: Started cri-containerd-43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6.scope.
Feb 12 19:18:48.464935 env[1141]: time="2024-02-12T19:18:48.464891503Z" level=info msg="StartContainer for \"43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6\" returns successfully"
Feb 12 19:18:48.471663 systemd[1]: cri-containerd-43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6.scope: Deactivated successfully.
Feb 12 19:18:48.496857 env[1141]: time="2024-02-12T19:18:48.496800060Z" level=info msg="shim disconnected" id=43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6
Feb 12 19:18:48.497033 env[1141]: time="2024-02-12T19:18:48.496859737Z" level=warning msg="cleaning up after shim disconnected" id=43c1e34344bc0cdd7abb3420c9d070a6d82df9c4d53881a0bccb5451323064a6 namespace=k8s.io
Feb 12 19:18:48.497033 env[1141]: time="2024-02-12T19:18:48.496869896Z" level=info msg="cleaning up dead shim"
Feb 12 19:18:48.502919 env[1141]: time="2024-02-12T19:18:48.502875547Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3844 runtime=io.containerd.runc.v2\n"
Feb 12 19:18:48.806230 kubelet[1973]: I0212 19:18:48.805773 1973 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="efdcb056-c3cb-4537-9c42-9cbacdb55e9f" path="/var/lib/kubelet/pods/efdcb056-c3cb-4537-9c42-9cbacdb55e9f/volumes"
Feb 12 19:18:48.969149 kubelet[1973]: I0212 19:18:48.969101 1973 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-12T19:18:48Z","lastTransitionTime":"2024-02-12T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 12 19:18:49.000160 kubelet[1973]: E0212 19:18:49.000136 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:49.002762 env[1141]: time="2024-02-12T19:18:49.002707261Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:18:49.011863 env[1141]: time="2024-02-12T19:18:49.011800115Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80\""
Feb 12 19:18:49.012366 env[1141]: time="2024-02-12T19:18:49.012314531Z" level=info msg="StartContainer for \"dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80\""
Feb 12 19:18:49.027210 systemd[1]: Started cri-containerd-dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80.scope.
Feb 12 19:18:49.056885 env[1141]: time="2024-02-12T19:18:49.056785244Z" level=info msg="StartContainer for \"dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80\" returns successfully"
Feb 12 19:18:49.067600 systemd[1]: cri-containerd-dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80.scope: Deactivated successfully.
Feb 12 19:18:49.085521 env[1141]: time="2024-02-12T19:18:49.085479898Z" level=info msg="shim disconnected" id=dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80
Feb 12 19:18:49.085712 env[1141]: time="2024-02-12T19:18:49.085695208Z" level=warning msg="cleaning up after shim disconnected" id=dc635d30f4270f55dd676c29bfb8070ec015e5d6fc7257f2b46574e603aa9d80 namespace=k8s.io
Feb 12 19:18:49.085775 env[1141]: time="2024-02-12T19:18:49.085763325Z" level=info msg="cleaning up dead shim"
Feb 12 19:18:49.092626 env[1141]: time="2024-02-12T19:18:49.092591444Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3907 runtime=io.containerd.runc.v2\n"
Feb 12 19:18:50.004703 kubelet[1973]: E0212 19:18:50.004650 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:50.007160 env[1141]: time="2024-02-12T19:18:50.007128208Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:18:50.037203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387967235.mount: Deactivated successfully.
Feb 12 19:18:50.041082 env[1141]: time="2024-02-12T19:18:50.041022888Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa\""
Feb 12 19:18:50.041633 env[1141]: time="2024-02-12T19:18:50.041598583Z" level=info msg="StartContainer for \"4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa\""
Feb 12 19:18:50.059653 systemd[1]: Started cri-containerd-4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa.scope.
Feb 12 19:18:50.094068 systemd[1]: cri-containerd-4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa.scope: Deactivated successfully.
Feb 12 19:18:50.096161 env[1141]: time="2024-02-12T19:18:50.096119867Z" level=info msg="StartContainer for \"4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa\" returns successfully"
Feb 12 19:18:50.120444 env[1141]: time="2024-02-12T19:18:50.120397555Z" level=info msg="shim disconnected" id=4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa
Feb 12 19:18:50.120698 env[1141]: time="2024-02-12T19:18:50.120678703Z" level=warning msg="cleaning up after shim disconnected" id=4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa namespace=k8s.io
Feb 12 19:18:50.120779 env[1141]: time="2024-02-12T19:18:50.120765580Z" level=info msg="cleaning up dead shim"
Feb 12 19:18:50.127533 env[1141]: time="2024-02-12T19:18:50.127502453Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 runtime=io.containerd.runc.v2\n"
Feb 12 19:18:50.227969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cf1c9d679ba05e3727de2697bf46b1edc5ddc666ebd51112cb71e638e911efa-rootfs.mount: Deactivated successfully.
Feb 12 19:18:51.007545 kubelet[1973]: E0212 19:18:51.007491 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:51.009408 env[1141]: time="2024-02-12T19:18:51.009358461Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:18:51.034770 env[1141]: time="2024-02-12T19:18:51.034715372Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd\""
Feb 12 19:18:51.035440 env[1141]: time="2024-02-12T19:18:51.035408546Z" level=info msg="StartContainer for \"fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd\""
Feb 12 19:18:51.052925 systemd[1]: Started cri-containerd-fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd.scope.
Feb 12 19:18:51.083445 systemd[1]: cri-containerd-fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd.scope: Deactivated successfully.
Feb 12 19:18:51.086611 env[1141]: time="2024-02-12T19:18:51.086555832Z" level=info msg="StartContainer for \"fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd\" returns successfully"
Feb 12 19:18:51.107753 env[1141]: time="2024-02-12T19:18:51.107704865Z" level=info msg="shim disconnected" id=fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd
Feb 12 19:18:51.107753 env[1141]: time="2024-02-12T19:18:51.107751943Z" level=warning msg="cleaning up after shim disconnected" id=fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd namespace=k8s.io
Feb 12 19:18:51.107986 env[1141]: time="2024-02-12T19:18:51.107761622Z" level=info msg="cleaning up dead shim"
Feb 12 19:18:51.114306 env[1141]: time="2024-02-12T19:18:51.114263934Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4018 runtime=io.containerd.runc.v2\n"
Feb 12 19:18:51.228028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fabf6b91cce31d94393d2d6793189d1dbf9b86b37097cd80fe4c5a3661c5e7bd-rootfs.mount: Deactivated successfully.
Feb 12 19:18:51.861230 kubelet[1973]: E0212 19:18:51.861200 1973 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:18:52.011796 kubelet[1973]: E0212 19:18:52.011753 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:52.015278 env[1141]: time="2024-02-12T19:18:52.015235060Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:18:52.038601 env[1141]: time="2024-02-12T19:18:52.038555987Z" level=info msg="CreateContainer within sandbox \"126f69c89eb969518d176c889f56661ae52d3bd635ca130dd51a670e628d3cc0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0613065077c3c0445aa1596f4943fe09c1c150811ad08b3c9a28f404857bcc61\""
Feb 12 19:18:52.039372 env[1141]: time="2024-02-12T19:18:52.039324840Z" level=info msg="StartContainer for \"0613065077c3c0445aa1596f4943fe09c1c150811ad08b3c9a28f404857bcc61\""
Feb 12 19:18:52.053602 systemd[1]: Started cri-containerd-0613065077c3c0445aa1596f4943fe09c1c150811ad08b3c9a28f404857bcc61.scope.
Feb 12 19:18:52.086085 env[1141]: time="2024-02-12T19:18:52.086028251Z" level=info msg="StartContainer for \"0613065077c3c0445aa1596f4943fe09c1c150811ad08b3c9a28f404857bcc61\" returns successfully"
Feb 12 19:18:52.323866 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 12 19:18:53.017440 kubelet[1973]: E0212 19:18:53.017403 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:54.337794 kubelet[1973]: E0212 19:18:54.337764 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:54.985584 systemd-networkd[1055]: lxc_health: Link UP
Feb 12 19:18:54.990859 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:18:54.991292 systemd-networkd[1055]: lxc_health: Gained carrier
Feb 12 19:18:56.338612 kubelet[1973]: E0212 19:18:56.338584 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:56.357515 kubelet[1973]: I0212 19:18:56.357476 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mhflc" podStartSLOduration=8.357439595 podCreationTimestamp="2024-02-12 19:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:18:53.032876189 +0000 UTC m=+86.330726347" watchObservedRunningTime="2024-02-12 19:18:56.357439595 +0000 UTC m=+89.655289713"
Feb 12 19:18:56.644519 systemd[1]: run-containerd-runc-k8s.io-0613065077c3c0445aa1596f4943fe09c1c150811ad08b3c9a28f404857bcc61-runc.EjFdze.mount: Deactivated successfully.
Feb 12 19:18:56.991967 systemd-networkd[1055]: lxc_health: Gained IPv6LL
Feb 12 19:18:57.024612 kubelet[1973]: E0212 19:18:57.024514 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:18:59.802911 kubelet[1973]: E0212 19:18:59.802864 1973 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:19:00.979244 sshd[3732]: pam_unix(sshd:session): session closed for user core
Feb 12 19:19:00.981588 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:54594.service: Deactivated successfully.
Feb 12 19:19:00.982334 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 19:19:00.982906 systemd-logind[1126]: Session 24 logged out. Waiting for processes to exit.
Feb 12 19:19:00.983681 systemd-logind[1126]: Removed session 24.