Feb 12 19:13:38.749899 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:13:38.749923 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:13:38.749931 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:13:38.749937 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 12 19:13:38.749942 kernel: random: crng init done
Feb 12 19:13:38.749947 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:13:38.749954 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:13:38.749961 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:13:38.749966 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.749972 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.749977 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.749983 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.749988 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.749993 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.750001 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.750007 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.750013 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:13:38.750019 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:13:38.750025 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:13:38.750031 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:13:38.750037 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Feb 12 19:13:38.750042 kernel: Zone ranges:
Feb 12 19:13:38.750049 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:13:38.750055 kernel: DMA32 empty
Feb 12 19:13:38.750061 kernel: Normal empty
Feb 12 19:13:38.750067 kernel: Movable zone start for each node
Feb 12 19:13:38.750072 kernel: Early memory node ranges
Feb 12 19:13:38.750078 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:13:38.750084 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:13:38.750090 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:13:38.750096 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:13:38.750102 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:13:38.750107 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:13:38.750113 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:13:38.750119 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:13:38.750126 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:13:38.750132 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:13:38.750137 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:13:38.750143 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:13:38.750149 kernel: psci: Trusted OS migration not required
Feb 12 19:13:38.750158 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:13:38.750164 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:13:38.750172 kernel: ACPI: SRAT not present
Feb 12 19:13:38.750178 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:13:38.750184 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:13:38.750191 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 12 19:13:38.750197 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:13:38.750203 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:13:38.750209 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:13:38.750215 kernel: CPU features: detected: Spectre-v4
Feb 12 19:13:38.750221 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:13:38.750229 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:13:38.750236 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:13:38.750242 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:13:38.750248 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 12 19:13:38.750254 kernel: Policy zone: DMA
Feb 12 19:13:38.750261 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:13:38.750268 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:13:38.750275 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:13:38.750281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:13:38.750287 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:13:38.750294 kernel: Memory: 2459148K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113140K reserved, 0K cma-reserved)
Feb 12 19:13:38.750301 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:13:38.750308 kernel: trace event string verifier disabled
Feb 12 19:13:38.750314 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:13:38.750320 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:13:38.750326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:13:38.750333 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:13:38.750339 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:13:38.750345 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:13:38.750352 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:13:38.750358 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:13:38.750364 kernel: GICv3: 256 SPIs implemented
Feb 12 19:13:38.750371 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:13:38.750377 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:13:38.750383 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:13:38.750390 kernel: GICv3: 16 PPIs implemented
Feb 12 19:13:38.750396 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:13:38.750402 kernel: ACPI: SRAT not present
Feb 12 19:13:38.750408 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:13:38.750414 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:13:38.750420 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:13:38.750426 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:13:38.750433 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:13:38.750439 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:13:38.750446 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:13:38.750453 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:13:38.750460 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:13:38.750466 kernel: arm-pv: using stolen time PV
Feb 12 19:13:38.750472 kernel: Console: colour dummy device 80x25
Feb 12 19:13:38.750479 kernel: ACPI: Core revision 20210730
Feb 12 19:13:38.750486 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:13:38.750492 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:13:38.750498 kernel: LSM: Security Framework initializing
Feb 12 19:13:38.750517 kernel: SELinux: Initializing.
Feb 12 19:13:38.750525 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:13:38.750531 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:13:38.750538 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:13:38.750544 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:13:38.750550 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:13:38.750556 kernel: Remapping and enabling EFI services.
Feb 12 19:13:38.750563 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:13:38.750569 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:13:38.750575 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:13:38.750583 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:13:38.750589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:13:38.750596 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:13:38.750602 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:13:38.750609 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:13:38.750615 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:13:38.750622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:13:38.750628 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:13:38.750634 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:13:38.750640 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:13:38.750648 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:13:38.750654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:13:38.750661 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:13:38.750667 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:13:38.750677 kernel: SMP: Total of 4 processors activated.
Feb 12 19:13:38.750686 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:13:38.750692 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:13:38.750699 kernel: CPU features: detected: Common not Private translations
Feb 12 19:13:38.750706 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:13:38.750713 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:13:38.750719 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:13:38.750726 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:13:38.750734 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:13:38.750741 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:13:38.750747 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:13:38.750760 kernel: alternatives: patching kernel code
Feb 12 19:13:38.750767 kernel: devtmpfs: initialized
Feb 12 19:13:38.750775 kernel: KASLR enabled
Feb 12 19:13:38.750782 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:13:38.750788 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:13:38.750795 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:13:38.750802 kernel: SMBIOS 3.0.0 present.
Feb 12 19:13:38.750808 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:13:38.750815 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:13:38.750821 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:13:38.750829 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:13:38.750837 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:13:38.750843 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:13:38.750850 kernel: audit: type=2000 audit(0.038:1): state=initialized audit_enabled=0 res=1
Feb 12 19:13:38.750857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:13:38.750864 kernel: cpuidle: using governor menu
Feb 12 19:13:38.750870 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:13:38.750881 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:13:38.750888 kernel: ACPI: bus type PCI registered
Feb 12 19:13:38.750895 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:13:38.750903 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:13:38.750909 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:13:38.750916 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:13:38.750922 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:13:38.750929 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:13:38.750936 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:13:38.750943 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:13:38.750949 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:13:38.750956 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:13:38.750964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:13:38.750971 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:13:38.750978 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:13:38.750985 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:13:38.750991 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:13:38.750998 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:13:38.751005 kernel: ACPI: Interpreter enabled
Feb 12 19:13:38.751011 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:13:38.751018 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:13:38.751026 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:13:38.751032 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:13:38.751039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:13:38.751173 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:13:38.751240 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:13:38.751301 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:13:38.751361 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:13:38.751423 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:13:38.751432 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 12 19:13:38.751439 kernel: PCI host bridge to bus 0000:00
Feb 12 19:13:38.751506 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:13:38.751562 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 19:13:38.751616 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:13:38.751669 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:13:38.751745 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:13:38.751827 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:13:38.751916 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 12 19:13:38.751983 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:13:38.752044 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:13:38.752106 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:13:38.752168 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:13:38.752233 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 12 19:13:38.752289 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:13:38.752345 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 19:13:38.752400 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:13:38.752409 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:13:38.752416 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:13:38.752422 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:13:38.752430 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:13:38.752437 kernel: iommu: Default domain type: Translated
Feb 12 19:13:38.752444 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:13:38.752451 kernel: vgaarb: loaded
Feb 12 19:13:38.752458 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:13:38.752464 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:13:38.752471 kernel: PTP clock support registered
Feb 12 19:13:38.752478 kernel: Registered efivars operations
Feb 12 19:13:38.752484 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:13:38.752491 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:13:38.752499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:13:38.752506 kernel: pnp: PnP ACPI init
Feb 12 19:13:38.752573 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:13:38.752583 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:13:38.752590 kernel: NET: Registered PF_INET protocol family
Feb 12 19:13:38.752596 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:13:38.752603 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:13:38.752610 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:13:38.752619 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:13:38.752626 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:13:38.752633 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:13:38.752639 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:13:38.752646 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:13:38.752653 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:13:38.752659 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:13:38.752666 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:13:38.752673 kernel: kvm [1]: HYP mode not available
Feb 12 19:13:38.752681 kernel: Initialise system trusted keyrings
Feb 12 19:13:38.752688 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:13:38.752694 kernel: Key type asymmetric registered
Feb 12 19:13:38.752701 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:13:38.752707 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:13:38.752714 kernel: io scheduler mq-deadline registered
Feb 12 19:13:38.752725 kernel: io scheduler kyber registered
Feb 12 19:13:38.752732 kernel: io scheduler bfq registered
Feb 12 19:13:38.752739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:13:38.752747 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:13:38.752760 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:13:38.752825 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:13:38.752834 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:13:38.752841 kernel: thunder_xcv, ver 1.0
Feb 12 19:13:38.752847 kernel: thunder_bgx, ver 1.0
Feb 12 19:13:38.752854 kernel: nicpf, ver 1.0
Feb 12 19:13:38.752861 kernel: nicvf, ver 1.0
Feb 12 19:13:38.752960 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:13:38.753022 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:13:38 UTC (1707765218)
Feb 12 19:13:38.753031 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:13:38.753038 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:13:38.753045 kernel: Segment Routing with IPv6
Feb 12 19:13:38.753051 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:13:38.753058 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:13:38.753065 kernel: Key type dns_resolver registered
Feb 12 19:13:38.753071 kernel: registered taskstats version 1
Feb 12 19:13:38.753080 kernel: Loading compiled-in X.509 certificates
Feb 12 19:13:38.753087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:13:38.753094 kernel: Key type .fscrypt registered
Feb 12 19:13:38.753101 kernel: Key type fscrypt-provisioning registered
Feb 12 19:13:38.753108 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:13:38.753114 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:13:38.753121 kernel: ima: No architecture policies found
Feb 12 19:13:38.753128 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:13:38.753134 kernel: Run /init as init process
Feb 12 19:13:38.753142 kernel: with arguments:
Feb 12 19:13:38.753149 kernel: /init
Feb 12 19:13:38.753155 kernel: with environment:
Feb 12 19:13:38.753162 kernel: HOME=/
Feb 12 19:13:38.753168 kernel: TERM=linux
Feb 12 19:13:38.753175 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:13:38.753183 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:13:38.753192 systemd[1]: Detected virtualization kvm.
Feb 12 19:13:38.753201 systemd[1]: Detected architecture arm64.
Feb 12 19:13:38.753209 systemd[1]: Running in initrd.
Feb 12 19:13:38.753216 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:13:38.753223 systemd[1]: Hostname set to .
Feb 12 19:13:38.753230 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:13:38.753237 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:13:38.753244 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:13:38.753251 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:13:38.753260 systemd[1]: Reached target paths.target.
Feb 12 19:13:38.753267 systemd[1]: Reached target slices.target.
Feb 12 19:13:38.753274 systemd[1]: Reached target swap.target.
Feb 12 19:13:38.753282 systemd[1]: Reached target timers.target.
Feb 12 19:13:38.753289 systemd[1]: Listening on iscsid.socket.
Feb 12 19:13:38.753296 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:13:38.753304 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:13:38.753312 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:13:38.753320 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:13:38.753327 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:13:38.753334 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:13:38.753342 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:13:38.753349 systemd[1]: Reached target sockets.target.
Feb 12 19:13:38.753356 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:13:38.753363 systemd[1]: Finished network-cleanup.service.
Feb 12 19:13:38.753370 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:13:38.753379 systemd[1]: Starting systemd-journald.service...
Feb 12 19:13:38.753386 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:13:38.753393 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:13:38.753401 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:13:38.753408 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:13:38.753415 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:13:38.753422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:13:38.753429 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:13:38.753437 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:13:38.753445 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:13:38.753456 systemd-journald[290]: Journal started
Feb 12 19:13:38.753493 systemd-journald[290]: Runtime Journal (/run/log/journal/1b069c1de042452f8272c7ea2cd14329) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:13:38.735566 systemd-modules-load[291]: Inserted module 'overlay'
Feb 12 19:13:38.754970 systemd[1]: Started systemd-journald.service.
Feb 12 19:13:38.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.762905 kernel: audit: type=1130 audit(1707765218.754:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.768553 systemd-resolved[292]: Positive Trust Anchors:
Feb 12 19:13:38.768567 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:13:38.768595 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:13:38.775790 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 12 19:13:38.777658 systemd[1]: Started systemd-resolved.service.
Feb 12 19:13:38.780934 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:13:38.780955 kernel: audit: type=1130 audit(1707765218.778:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.778413 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:13:38.782522 kernel: Bridge firewalling registered
Feb 12 19:13:38.781859 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 12 19:13:38.783597 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:13:38.788257 kernel: audit: type=1130 audit(1707765218.784:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.785246 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:13:38.793931 kernel: SCSI subsystem initialized
Feb 12 19:13:38.795971 dracut-cmdline[308]: dracut-dracut-053
Feb 12 19:13:38.798955 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:13:38.804220 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:13:38.804253 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:13:38.805015 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:13:38.807320 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 12 19:13:38.808239 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:13:38.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.809763 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:13:38.814912 kernel: audit: type=1130 audit(1707765218.808:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.818451 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:13:38.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.821903 kernel: audit: type=1130 audit(1707765218.819:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.863897 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:13:38.876924 kernel: iscsi: registered transport (tcp)
Feb 12 19:13:38.890895 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:13:38.890929 kernel: QLogic iSCSI HBA Driver
Feb 12 19:13:38.925690 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:13:38.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.927415 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:13:38.929891 kernel: audit: type=1130 audit(1707765218.926:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:38.971906 kernel: raid6: neonx8 gen() 13712 MB/s
Feb 12 19:13:38.988895 kernel: raid6: neonx8 xor() 5890 MB/s
Feb 12 19:13:39.005900 kernel: raid6: neonx4 gen() 12518 MB/s
Feb 12 19:13:39.022898 kernel: raid6: neonx4 xor() 10327 MB/s
Feb 12 19:13:39.039896 kernel: raid6: neonx2 gen() 12130 MB/s
Feb 12 19:13:39.056893 kernel: raid6: neonx2 xor() 10146 MB/s
Feb 12 19:13:39.073896 kernel: raid6: neonx1 gen() 10311 MB/s
Feb 12 19:13:39.090900 kernel: raid6: neonx1 xor() 8551 MB/s
Feb 12 19:13:39.107894 kernel: raid6: int64x8 gen() 6156 MB/s
Feb 12 19:13:39.124896 kernel: raid6: int64x8 xor() 3451 MB/s
Feb 12 19:13:39.141894 kernel: raid6: int64x4 gen() 7100 MB/s
Feb 12 19:13:39.158904 kernel: raid6: int64x4 xor() 3760 MB/s
Feb 12 19:13:39.175904 kernel: raid6: int64x2 gen() 6002 MB/s
Feb 12 19:13:39.192902 kernel: raid6: int64x2 xor() 3239 MB/s
Feb 12 19:13:39.209896 kernel: raid6: int64x1 gen() 4920 MB/s
Feb 12 19:13:39.227255 kernel: raid6: int64x1 xor() 2586 MB/s
Feb 12 19:13:39.227268 kernel: raid6: using algorithm neonx8 gen() 13712 MB/s
Feb 12 19:13:39.227276 kernel: raid6: .... xor() 5890 MB/s, rmw enabled
Feb 12 19:13:39.227285 kernel: raid6: using neon recovery algorithm
Feb 12 19:13:39.240948 kernel: xor: measuring software checksum speed
Feb 12 19:13:39.240968 kernel: 8regs : 17293 MB/sec
Feb 12 19:13:39.241895 kernel: 32regs : 20755 MB/sec
Feb 12 19:13:39.243142 kernel: arm64_neon : 27882 MB/sec
Feb 12 19:13:39.243153 kernel: xor: using function: arm64_neon (27882 MB/sec)
Feb 12 19:13:39.304908 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:13:39.315130 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:13:39.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:39.317000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:13:39.318974 kernel: audit: type=1130 audit(1707765219.315:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:39.318999 kernel: audit: type=1334 audit(1707765219.317:9): prog-id=7 op=LOAD
Feb 12 19:13:39.319016 kernel: audit: type=1334 audit(1707765219.318:10): prog-id=8 op=LOAD
Feb 12 19:13:39.318000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:13:39.319456 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:13:39.334944 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 12 19:13:39.338361 systemd[1]: Started systemd-udevd.service.
Feb 12 19:13:39.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:39.339830 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:13:39.353875 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Feb 12 19:13:39.384842 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:13:39.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:39.386664 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:13:39.422705 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:13:39.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:39.465823 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:13:39.468302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:13:39.468333 kernel: GPT:9289727 != 19775487
Feb 12 19:13:39.468346 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:13:39.468355 kernel: GPT:9289727 != 19775487
Feb 12 19:13:39.468974 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:13:39.468992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:13:39.488520 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:13:39.492900 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551)
Feb 12 19:13:39.494283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:13:39.501973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:13:39.502995 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:13:39.507238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:13:39.508691 systemd[1]: Starting disk-uuid.service...
Feb 12 19:13:39.514813 disk-uuid[563]: Primary Header is updated.
Feb 12 19:13:39.514813 disk-uuid[563]: Secondary Entries is updated.
Feb 12 19:13:39.514813 disk-uuid[563]: Secondary Header is updated.
Feb 12 19:13:39.517906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:13:40.529912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:13:40.529980 disk-uuid[564]: The operation has completed successfully.
Feb 12 19:13:40.557456 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:13:40.558587 systemd[1]: Finished disk-uuid.service.
Feb 12 19:13:40.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.561004 systemd[1]: Starting verity-setup.service...
Feb 12 19:13:40.579914 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:13:40.603114 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:13:40.605089 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:13:40.607948 systemd[1]: Finished verity-setup.service.
Feb 12 19:13:40.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.655536 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:13:40.656920 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:13:40.656355 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:13:40.657126 systemd[1]: Starting ignition-setup.service...
Feb 12 19:13:40.659317 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:13:40.666050 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:13:40.666094 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:13:40.666104 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:13:40.674415 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:13:40.682068 systemd[1]: Finished ignition-setup.service.
Feb 12 19:13:40.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.683738 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:13:40.748322 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:13:40.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.748000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:13:40.750564 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:13:40.778392 systemd-networkd[740]: lo: Link UP
Feb 12 19:13:40.778398 systemd-networkd[740]: lo: Gained carrier
Feb 12 19:13:40.779383 systemd-networkd[740]: Enumeration completed
Feb 12 19:13:40.779514 systemd[1]: Started systemd-networkd.service.
Feb 12 19:13:40.780178 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:13:40.780265 systemd[1]: Reached target network.target.
Feb 12 19:13:40.782228 systemd[1]: Starting iscsiuio.service...
Feb 12 19:13:40.782945 systemd-networkd[740]: eth0: Link UP
Feb 12 19:13:40.782949 systemd-networkd[740]: eth0: Gained carrier
Feb 12 19:13:40.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.794433 systemd[1]: Started iscsiuio.service.
Feb 12 19:13:40.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.796001 systemd[1]: Starting iscsid.service...
Feb 12 19:13:40.799388 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:13:40.799388 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 19:13:40.799388 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:13:40.799388 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:13:40.799388 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:13:40.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.800774 ignition[655]: Ignition 2.14.0
Feb 12 19:13:40.809319 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:13:40.802717 systemd[1]: Started iscsid.service.
Feb 12 19:13:40.800783 ignition[655]: Stage: fetch-offline
Feb 12 19:13:40.806306 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:13:40.800827 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:13:40.810991 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:13:40.801055 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:13:40.801274 ignition[655]: parsed url from cmdline: ""
Feb 12 19:13:40.801278 ignition[655]: no config URL provided
Feb 12 19:13:40.801284 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:13:40.801293 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:13:40.801313 ignition[655]: op(1): [started] loading QEMU firmware config module
Feb 12 19:13:40.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.816808 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:13:40.801317 ignition[655]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 19:13:40.817869 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:13:40.806252 ignition[655]: op(1): [finished] loading QEMU firmware config module
Feb 12 19:13:40.819322 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:13:40.806283 ignition[655]: QEMU firmware config was not found. Ignoring...
Feb 12 19:13:40.820802 systemd[1]: Reached target remote-fs.target.
Feb 12 19:13:40.823006 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:13:40.830605 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:13:40.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.896971 ignition[655]: parsing config with SHA512: 292477e4ee104ae2539885b762ef423dc66cb5736c34655cafbe9c8364e11620902800d2123c59683729e28ddb9458dde59473b9059df4fd7f97d2e221d3fd77
Feb 12 19:13:40.937092 unknown[655]: fetched base config from "system"
Feb 12 19:13:40.937105 unknown[655]: fetched user config from "qemu"
Feb 12 19:13:40.937859 ignition[655]: fetch-offline: fetch-offline passed
Feb 12 19:13:40.937952 ignition[655]: Ignition finished successfully
Feb 12 19:13:40.939347 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:13:40.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.940197 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 19:13:40.940982 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:13:40.949552 ignition[762]: Ignition 2.14.0
Feb 12 19:13:40.949561 ignition[762]: Stage: kargs
Feb 12 19:13:40.949664 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:13:40.949673 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:13:40.950828 ignition[762]: kargs: kargs passed
Feb 12 19:13:40.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.952220 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:13:40.950891 ignition[762]: Ignition finished successfully
Feb 12 19:13:40.954116 systemd[1]: Starting ignition-disks.service...
Feb 12 19:13:40.960869 ignition[768]: Ignition 2.14.0
Feb 12 19:13:40.960896 ignition[768]: Stage: disks
Feb 12 19:13:40.961000 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:13:40.961010 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:13:40.962143 ignition[768]: disks: disks passed
Feb 12 19:13:40.962192 ignition[768]: Ignition finished successfully
Feb 12 19:13:40.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.964109 systemd[1]: Finished ignition-disks.service.
Feb 12 19:13:40.965234 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:13:40.966285 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:13:40.967346 systemd[1]: Reached target local-fs.target.
Feb 12 19:13:40.968376 systemd[1]: Reached target sysinit.target.
Feb 12 19:13:40.969349 systemd[1]: Reached target basic.target.
Feb 12 19:13:40.971216 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:13:40.983143 systemd-fsck[776]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 19:13:40.987267 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:13:40.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:40.989192 systemd[1]: Mounting sysroot.mount...
Feb 12 19:13:40.998898 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:13:40.998965 systemd[1]: Mounted sysroot.mount.
Feb 12 19:13:40.999705 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:13:41.001850 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:13:41.002720 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 19:13:41.002769 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:13:41.002792 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:13:41.004819 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:13:41.007320 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:13:41.011965 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:13:41.016578 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:13:41.020900 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:13:41.023900 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:13:41.054575 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:13:41.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:41.056169 systemd[1]: Starting ignition-mount.service...
Feb 12 19:13:41.057418 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:13:41.062212 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:13:41.071225 ignition[829]: INFO : Ignition 2.14.0
Feb 12 19:13:41.071225 ignition[829]: INFO : Stage: mount
Feb 12 19:13:41.072939 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:13:41.072939 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:13:41.072939 ignition[829]: INFO : mount: mount passed
Feb 12 19:13:41.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:41.074136 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:13:41.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:41.077327 ignition[829]: INFO : Ignition finished successfully
Feb 12 19:13:41.075664 systemd[1]: Finished ignition-mount.service.
Feb 12 19:13:41.614215 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:13:41.619893 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
Feb 12 19:13:41.621153 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:13:41.621167 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:13:41.621176 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:13:41.624307 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:13:41.625984 systemd[1]: Starting ignition-files.service...
Feb 12 19:13:41.639607 ignition[857]: INFO : Ignition 2.14.0
Feb 12 19:13:41.639607 ignition[857]: INFO : Stage: files
Feb 12 19:13:41.640817 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:13:41.640817 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:13:41.640817 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:13:41.643283 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:13:41.643283 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:13:41.645509 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:13:41.645509 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:13:41.647456 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:13:41.647456 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:13:41.647456 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 12 19:13:41.645990 unknown[857]: wrote ssh authorized keys file for user: core
Feb 12 19:13:41.698810 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:13:41.740293 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:13:41.741859 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 12 19:13:41.741859 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 12 19:13:41.905036 systemd-networkd[740]: eth0: Gained IPv6LL
Feb 12 19:13:42.051416 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:13:42.182430 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 12 19:13:42.184550 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 12 19:13:42.184550 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 12 19:13:42.184550 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 12 19:13:42.407766 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:13:42.658829 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 12 19:13:42.661300 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 12 19:13:42.661300 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:13:42.661300 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:13:42.661300 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:13:42.661300 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1
Feb 12 19:13:42.709088 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:13:42.984988 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc
Feb 12 19:13:42.987121 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:13:42.987121 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:13:42.987121 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:13:43.008424 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:13:43.600201 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Feb 12 19:13:43.602431 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:13:43.602431 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:13:43.602431 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Feb 12 19:13:43.624866 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:13:43.913304 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Feb 12 19:13:43.913304 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:13:43.916758 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:13:43.916758 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 12 19:13:44.148630 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 19:13:44.192981 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:13:44.192981 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:13:44.195668 ignition[857]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:13:44.218458 ignition[857]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:13:44.251585 ignition[857]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:13:44.252911 ignition[857]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:13:44.252911 ignition[857]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:13:44.252911 ignition[857]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:13:44.252911 ignition[857]: INFO : files: files passed
Feb 12 19:13:44.252911 ignition[857]: INFO : Ignition finished successfully
Feb 12 19:13:44.263181 kernel: kauditd_printk_skb: 21 callbacks suppressed
Feb 12 19:13:44.263205 kernel: audit: type=1130 audit(1707765224.253:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.252948 systemd[1]: Finished ignition-files.service.
Feb 12 19:13:44.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.255659 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:13:44.267670 kernel: audit: type=1130 audit(1707765224.263:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.260685 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:13:44.269969 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:13:44.275267 kernel: audit: type=1130 audit(1707765224.270:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.275288 kernel: audit: type=1131 audit(1707765224.270:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.261456 systemd[1]: Starting ignition-quench.service...
Feb 12 19:13:44.276061 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:13:44.262702 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:13:44.263969 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:13:44.267751 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:13:44.268615 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:13:44.268700 systemd[1]: Finished ignition-quench.service.
Feb 12 19:13:44.282975 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:13:44.283077 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:13:44.288670 kernel: audit: type=1130 audit(1707765224.283:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.288701 kernel: audit: type=1131 audit(1707765224.283:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.284778 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:13:44.289466 systemd[1]: Reached target initrd.target.
Feb 12 19:13:44.290687 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:13:44.291914 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:13:44.302085 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:13:44.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.303784 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:13:44.306520 kernel: audit: type=1130 audit(1707765224.302:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.312102 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:13:44.313075 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:13:44.314232 systemd[1]: Stopped target timers.target.
Feb 12 19:13:44.315476 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:13:44.318892 kernel: audit: type=1131 audit(1707765224.315:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.315603 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:13:44.316719 systemd[1]: Stopped target initrd.target.
Feb 12 19:13:44.319723 systemd[1]: Stopped target basic.target.
Feb 12 19:13:44.320969 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:13:44.322199 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:13:44.323193 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:13:44.324337 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:13:44.325425 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:13:44.326704 systemd[1]: Stopped target sysinit.target.
Feb 12 19:13:44.327674 systemd[1]: Stopped target local-fs.target.
Feb 12 19:13:44.329068 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:13:44.330250 systemd[1]: Stopped target swap.target.
Feb 12 19:13:44.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.331153 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:13:44.335676 kernel: audit: type=1131 audit(1707765224.332:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.331273 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:13:44.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.332448 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:13:44.339586 kernel: audit: type=1131 audit(1707765224.336:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.335165 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:13:44.335272 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:13:44.336384 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:13:44.336474 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:13:44.339292 systemd[1]: Stopped target paths.target.
Feb 12 19:13:44.340289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:13:44.343915 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:13:44.344937 systemd[1]: Stopped target slices.target.
Feb 12 19:13:44.346107 systemd[1]: Stopped target sockets.target.
Feb 12 19:13:44.346996 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:13:44.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.347109 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:13:44.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.348155 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:13:44.348249 systemd[1]: Stopped ignition-files.service.
Feb 12 19:13:44.350363 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:13:44.352652 iscsid[745]: iscsid shutting down.
Feb 12 19:13:44.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.351273 systemd[1]: Stopping iscsid.service...
Feb 12 19:13:44.352097 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:13:44.352237 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:13:44.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.354246 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:13:44.355173 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:13:44.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.359641 ignition[897]: INFO : Ignition 2.14.0
Feb 12 19:13:44.359641 ignition[897]: INFO : Stage: umount
Feb 12 19:13:44.359641 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:13:44.359641 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:13:44.359641 ignition[897]: INFO : umount: umount passed
Feb 12 19:13:44.359641 ignition[897]: INFO : Ignition finished successfully
Feb 12 19:13:44.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.355297 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:13:44.356813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:13:44.356935 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:13:44.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.359633 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:13:44.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.359735 systemd[1]: Stopped iscsid.service.
Feb 12 19:13:44.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.360828 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:13:44.360917 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:13:44.362179 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:13:44.362249 systemd[1]: Closed iscsid.socket.
Feb 12 19:13:44.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.363020 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:13:44.363063 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:13:44.365002 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:13:44.365043 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:13:44.366621 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:13:44.366659 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:13:44.367963 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:13:44.369752 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:13:44.370183 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:13:44.370265 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:13:44.371125 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:13:44.371194 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:13:44.372477 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:13:44.372551 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:13:44.374369 systemd[1]: Stopped target network.target.
Feb 12 19:13:44.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.376200 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:13:44.376233 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:13:44.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.377536 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:13:44.377598 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:13:44.379111 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:13:44.396000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:13:44.380914 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:13:44.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.390030 systemd-networkd[740]: eth0: DHCPv6 lease lost
Feb 12 19:13:44.397000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:13:44.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.390990 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:13:44.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.391087 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:13:44.392654 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:13:44.392747 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:13:44.394477 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:13:44.394507 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:13:44.395841 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:13:44.396753 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:13:44.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.396808 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:13:44.398286 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:13:44.398328 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:13:44.399844 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:13:44.399898 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:13:44.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.400855 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:13:44.405409 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:13:44.407960 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:13:44.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.408051 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:13:44.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.411903 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:13:44.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.412059 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:13:44.413134 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:13:44.413166 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:13:44.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.414030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:13:44.414058 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:13:44.415078 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:13:44.415118 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:13:44.416496 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:13:44.416535 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:13:44.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.417575 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:13:44.417614 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:13:44.419233 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:13:44.420272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:13:44.420330 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:13:44.424477 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:13:44.424563 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:13:44.426241 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:13:44.427846 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:13:44.434892 systemd[1]: Switching root.
Feb 12 19:13:44.452259 systemd-journald[290]: Journal stopped
Feb 12 19:13:46.558912 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:13:46.558971 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:13:46.558983 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:13:46.558995 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:13:46.559004 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:13:46.559018 kernel: SELinux: policy capability open_perms=1
Feb 12 19:13:46.559028 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:13:46.559037 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:13:46.559050 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:13:46.559059 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:13:46.559068 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:13:46.559078 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:13:46.559089 systemd[1]: Successfully loaded SELinux policy in 34.346ms.
Feb 12 19:13:46.559109 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.861ms.
Feb 12 19:13:46.559120 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:13:46.559131 systemd[1]: Detected virtualization kvm.
Feb 12 19:13:46.559142 systemd[1]: Detected architecture arm64.
Feb 12 19:13:46.559153 systemd[1]: Detected first boot.
Feb 12 19:13:46.559164 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:13:46.559177 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:13:46.559187 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:13:46.559197 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:13:46.559208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:13:46.559219 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:13:46.559231 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 19:13:46.559241 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 19:13:46.559252 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:13:46.559263 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:13:46.559273 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:13:46.559284 systemd[1]: Created slice system-getty.slice.
Feb 12 19:13:46.559294 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:13:46.559304 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:13:46.559315 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:13:46.559326 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:13:46.559337 systemd[1]: Created slice user.slice.
Feb 12 19:13:46.559349 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:13:46.559359 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:13:46.559370 systemd[1]: Set up automount boot.automount.
Feb 12 19:13:46.559381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:13:46.559392 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:13:46.559403 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:13:46.559414 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:13:46.559425 systemd[1]: Reached target integritysetup.target.
Feb 12 19:13:46.559435 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:13:46.559445 systemd[1]: Reached target remote-fs.target.
Feb 12 19:13:46.559456 systemd[1]: Reached target slices.target.
Feb 12 19:13:46.559467 systemd[1]: Reached target swap.target.
Feb 12 19:13:46.559478 systemd[1]: Reached target torcx.target.
Feb 12 19:13:46.559488 systemd[1]: Reached target veritysetup.target.
Feb 12 19:13:46.559499 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:13:46.559510 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:13:46.559520 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:13:46.559531 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:13:46.559541 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:13:46.559552 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:13:46.559562 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:13:46.559572 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:13:46.559582 systemd[1]: Mounting media.mount...
Feb 12 19:13:46.559592 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:13:46.559604 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:13:46.559615 systemd[1]: Mounting tmp.mount...
Feb 12 19:13:46.559626 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:13:46.559636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:13:46.559646 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:13:46.559657 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:13:46.559668 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:13:46.559678 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:13:46.559688 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:13:46.559698 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:13:46.559708 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:13:46.559720 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:13:46.559736 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 19:13:46.559747 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 19:13:46.559757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 19:13:46.559767 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 19:13:46.559778 systemd[1]: Stopped systemd-journald.service.
Feb 12 19:13:46.559787 kernel: fuse: init (API version 7.34)
Feb 12 19:13:46.559797 systemd[1]: Starting systemd-journald.service...
Feb 12 19:13:46.559808 kernel: loop: module loaded
Feb 12 19:13:46.559821 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:13:46.559833 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:13:46.559845 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:13:46.559856 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:13:46.559867 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 19:13:46.559882 systemd[1]: Stopped verity-setup.service.
Feb 12 19:13:46.559893 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:13:46.559903 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:13:46.559914 systemd[1]: Mounted media.mount.
Feb 12 19:13:46.559924 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:13:46.559934 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:13:46.559949 systemd[1]: Mounted tmp.mount.
Feb 12 19:13:46.559960 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:13:46.559970 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:13:46.559980 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:13:46.559991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:13:46.560001 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:13:46.560011 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:13:46.560024 systemd-journald[996]: Journal started
Feb 12 19:13:46.560064 systemd-journald[996]: Runtime Journal (/run/log/journal/1b069c1de042452f8272c7ea2cd14329) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:13:44.515000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:13:44.706000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:13:44.706000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:13:44.707000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:13:44.707000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:13:44.707000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:13:44.707000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:13:44.745000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:13:44.745000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd8ac a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:13:44.745000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:13:44.746000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 19:13:44.746000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd985 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:13:44.746000 audit: CWD cwd="/"
Feb 12 19:13:44.746000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:13:44.746000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:13:44.746000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:13:46.434000 audit: BPF prog-id=12 op=LOAD
Feb 12 19:13:46.434000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:13:46.434000 audit: BPF prog-id=13 op=LOAD
Feb 12 19:13:46.434000 audit: BPF prog-id=14 op=LOAD
Feb 12 19:13:46.434000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:13:46.434000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:13:46.435000 audit: BPF prog-id=15 op=LOAD
Feb 12 19:13:46.435000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 19:13:46.435000 audit: BPF prog-id=16 op=LOAD
Feb 12 19:13:46.435000 audit: BPF prog-id=17 op=LOAD
Feb 12 19:13:46.435000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 19:13:46.435000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 19:13:46.436000 audit: BPF prog-id=18 op=LOAD
Feb 12 19:13:46.436000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 19:13:46.436000 audit: BPF prog-id=19 op=LOAD
Feb 12 19:13:46.436000 audit: BPF prog-id=20 op=LOAD
Feb 12 19:13:46.436000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 19:13:46.436000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 19:13:46.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.447000 audit: BPF prog-id=18 op=UNLOAD
Feb 12 19:13:46.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.526000 audit: BPF prog-id=21 op=LOAD
Feb 12 19:13:46.526000 audit: BPF prog-id=22 op=LOAD
Feb 12 19:13:46.526000 audit: BPF prog-id=23 op=LOAD
Feb 12 19:13:46.526000 audit: BPF prog-id=19 op=UNLOAD
Feb 12 19:13:46.526000 audit: BPF prog-id=20 op=UNLOAD
Feb 12 19:13:46.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.561053 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:13:46.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.556000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:13:46.556000 audit[996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffcc3fb8a0 a2=4000 a3=1 items=0 ppid=1 pid=996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:13:46.556000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:13:46.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.744448 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:13:46.434223 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:13:44.744682 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:13:46.434236 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:13:44.744700 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:13:46.437860 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 19:13:44.744743 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 19:13:44.744752 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 19:13:44.744786 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 19:13:46.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:44.744798 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 19:13:44.745013 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 19:13:44.745048 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:13:44.745059 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:13:44.745417 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 19:13:44.745449 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 19:13:44.745466 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 19:13:44.745480 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 19:13:44.745496 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 19:13:44.745509 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 19:13:46.167408 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:13:46.167694 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:13:46.167800 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:13:46.167974 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:13:46.168023 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 19:13:46.168084 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-12T19:13:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 19:13:46.565158 systemd[1]: Started systemd-journald.service.
Feb 12 19:13:46.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.564028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:13:46.564157 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:13:46.565321 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 19:13:46.565490 systemd[1]: Finished modprobe@fuse.service.
Feb 12 19:13:46.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.566600 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:13:46.566776 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:13:46.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.568025 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:13:46.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.569177 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:13:46.570386 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:13:46.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.571769 systemd[1]: Reached target network-pre.target.
Feb 12 19:13:46.574214 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:13:46.576188 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:13:46.576978 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:13:46.579839 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:13:46.581824 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:13:46.582792 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:13:46.585813 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:13:46.586808 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:13:46.588028 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:13:46.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.591400 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:13:46.592453 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:13:46.593366 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:13:46.593581 systemd-journald[996]: Time spent on flushing to /var/log/journal/1b069c1de042452f8272c7ea2cd14329 is 14.597ms for 1034 entries.
Feb 12 19:13:46.593581 systemd-journald[996]: System Journal (/var/log/journal/1b069c1de042452f8272c7ea2cd14329) is 8.0M, max 195.6M, 187.6M free.
Feb 12 19:13:46.616183 systemd-journald[996]: Received client request to flush runtime journal.
Feb 12 19:13:46.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.596250 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:13:46.602570 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:13:46.603699 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:13:46.606516 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:13:46.608754 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:13:46.617243 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:13:46.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.618463 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:13:46.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.619623 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 19:13:46.628146 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:13:46.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.969273 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:13:46.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:46.969000 audit: BPF prog-id=24 op=LOAD
Feb 12 19:13:46.969000 audit: BPF prog-id=25 op=LOAD
Feb 12 19:13:46.969000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:13:46.969000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:13:46.971498 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:13:46.988834 systemd-udevd[1033]: Using default interface naming scheme 'v252'.
Feb 12 19:13:47.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.001000 audit: BPF prog-id=26 op=LOAD
Feb 12 19:13:47.000866 systemd[1]: Started systemd-udevd.service.
Feb 12 19:13:47.003349 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:13:47.010000 audit: BPF prog-id=27 op=LOAD
Feb 12 19:13:47.010000 audit: BPF prog-id=28 op=LOAD
Feb 12 19:13:47.010000 audit: BPF prog-id=29 op=LOAD
Feb 12 19:13:47.012116 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:13:47.024699 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 12 19:13:47.042439 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:13:47.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.063969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:13:47.095709 systemd-networkd[1041]: lo: Link UP
Feb 12 19:13:47.095984 systemd-networkd[1041]: lo: Gained carrier
Feb 12 19:13:47.096386 systemd-networkd[1041]: Enumeration completed
Feb 12 19:13:47.096595 systemd[1]: Started systemd-networkd.service.
Feb 12 19:13:47.096595 systemd-networkd[1041]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:13:47.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.098303 systemd-networkd[1041]: eth0: Link UP
Feb 12 19:13:47.098310 systemd-networkd[1041]: eth0: Gained carrier
Feb 12 19:13:47.112275 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:13:47.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.114394 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:13:47.115201 systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:13:47.124568 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:13:47.154839 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:13:47.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.155864 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:13:47.157810 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:13:47.161553 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:13:47.189850 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:13:47.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.190843 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:13:47.191658 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:13:47.191689 systemd[1]: Reached target local-fs.target.
Feb 12 19:13:47.192465 systemd[1]: Reached target machines.target.
Feb 12 19:13:47.194527 systemd[1]: Starting ldconfig.service...
Feb 12 19:13:47.195550 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:13:47.195608 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:13:47.196698 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:13:47.198562 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:13:47.200752 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:13:47.201763 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:13:47.201818 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:13:47.202957 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 19:13:47.205543 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl)
Feb 12 19:13:47.206654 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:13:47.211522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:13:47.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.228873 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 19:13:47.230951 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 19:13:47.281362 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 19:13:47.308065 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:13:47.308065 systemd-fsck[1077]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 12 19:13:47.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.311236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:13:47.314469 systemd[1]: Mounting boot.mount...
Feb 12 19:13:47.338478 systemd[1]: Mounted boot.mount.
Feb 12 19:13:47.351180 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:13:47.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.353826 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:13:47.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.427000 audit: BPF prog-id=30 op=LOAD
Feb 12 19:13:47.417989 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:13:47.420393 systemd[1]: Starting audit-rules.service...
Feb 12 19:13:47.422254 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:13:47.424387 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:13:47.429540 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:13:47.430000 audit: BPF prog-id=31 op=LOAD
Feb 12 19:13:47.432685 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:13:47.437660 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:13:47.439129 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:13:47.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.440595 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:13:47.450000 audit[1095]: SYSTEM_BOOT pid=1095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.453975 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:13:47.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.456188 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:13:47.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.459044 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:13:47.464763 systemd[1]: Finished ldconfig.service.
Feb 12 19:13:47.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:13:47.467076 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:13:47.471000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:13:47.471000 audit[1102]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcdf78ac0 a2=420 a3=0 items=0 ppid=1082 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:13:47.471000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:13:47.472365 augenrules[1102]: No rules
Feb 12 19:13:47.474349 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:13:47.475503 systemd[1]: Finished audit-rules.service.
Feb 12 19:13:47.484542 systemd-resolved[1091]: Positive Trust Anchors:
Feb 12 19:13:47.484554 systemd-resolved[1091]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:13:47.484583 systemd-resolved[1091]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:13:47.493159 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:13:47.494025 systemd-timesyncd[1092]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 12 19:13:47.494086 systemd-timesyncd[1092]: Initial clock synchronization to Mon 2024-02-12 19:13:47.884472 UTC.
Feb 12 19:13:47.494400 systemd[1]: Reached target time-set.target.
Feb 12 19:13:47.498002 systemd-resolved[1091]: Defaulting to hostname 'linux'.
Feb 12 19:13:47.499655 systemd[1]: Started systemd-resolved.service.
Feb 12 19:13:47.500428 systemd[1]: Reached target network.target.
Feb 12 19:13:47.501072 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:13:47.501686 systemd[1]: Reached target sysinit.target.
Feb 12 19:13:47.502375 systemd[1]: Started motdgen.path.
Feb 12 19:13:47.502970 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:13:47.503981 systemd[1]: Started logrotate.timer.
Feb 12 19:13:47.504662 systemd[1]: Started mdadm.timer.
Feb 12 19:13:47.505274 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:13:47.505931 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:13:47.505963 systemd[1]: Reached target paths.target.
Feb 12 19:13:47.506505 systemd[1]: Reached target timers.target.
Feb 12 19:13:47.507420 systemd[1]: Listening on dbus.socket.
Feb 12 19:13:47.509138 systemd[1]: Starting docker.socket...
Feb 12 19:13:47.513126 systemd[1]: Listening on sshd.socket.
Feb 12 19:13:47.513942 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:13:47.514354 systemd[1]: Listening on docker.socket.
Feb 12 19:13:47.515189 systemd[1]: Reached target sockets.target.
Feb 12 19:13:47.515935 systemd[1]: Reached target basic.target.
Feb 12 19:13:47.516684 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:13:47.516714 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:13:47.517963 systemd[1]: Starting containerd.service...
Feb 12 19:13:47.519698 systemd[1]: Starting dbus.service...
Feb 12 19:13:47.521406 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:13:47.523383 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:13:47.524277 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:13:47.525530 systemd[1]: Starting motdgen.service...
Feb 12 19:13:47.530162 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:13:47.531834 systemd[1]: Starting prepare-critools.service...
Feb 12 19:13:47.533829 jq[1113]: false
Feb 12 19:13:47.533482 systemd[1]: Starting prepare-helm.service...
Feb 12 19:13:47.537060 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:13:47.539010 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:13:47.542140 systemd[1]: Starting systemd-logind.service...
Feb 12 19:13:47.543255 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:13:47.543323 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:13:47.543781 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 19:13:47.545750 systemd[1]: Starting update-engine.service...
Feb 12 19:13:47.548449 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:13:47.550296 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:13:47.551649 jq[1133]: true
Feb 12 19:13:47.552456 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:13:47.552620 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:13:47.556792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:13:47.557085 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:13:47.565057 jq[1139]: true
Feb 12 19:13:47.565779 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:13:47.565975 systemd[1]: Finished motdgen.service.
Feb 12 19:13:47.567557 tar[1135]: ./
Feb 12 19:13:47.569022 tar[1138]: linux-arm64/helm
Feb 12 19:13:47.569227 tar[1136]: crictl
Feb 12 19:13:47.573302 tar[1135]: ./loopback
Feb 12 19:13:47.580984 extend-filesystems[1114]: Found vda
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda1
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda2
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda3
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found usr
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda4
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda6
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda7
Feb 12 19:13:47.581930 extend-filesystems[1114]: Found vda9
Feb 12 19:13:47.581930 extend-filesystems[1114]: Checking size of /dev/vda9
Feb 12 19:13:47.592217 dbus-daemon[1112]: [system] SELinux support is enabled
Feb 12 19:13:47.592364 systemd[1]: Started dbus.service.
Feb 12 19:13:47.594915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:13:47.594940 systemd[1]: Reached target system-config.target.
Feb 12 19:13:47.595684 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:13:47.595699 systemd[1]: Reached target user-config.target.
Feb 12 19:13:47.596197 systemd-logind[1129]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 12 19:13:47.600752 systemd-logind[1129]: New seat seat0.
Feb 12 19:13:47.604667 extend-filesystems[1114]: Resized partition /dev/vda9
Feb 12 19:13:47.615705 systemd[1]: Started systemd-logind.service.
Feb 12 19:13:47.620863 extend-filesystems[1164]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 19:13:47.625062 bash[1159]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:13:47.625984 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:13:47.650906 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 12 19:13:47.657382 update_engine[1131]: I0212 19:13:47.655349 1131 main.cc:92] Flatcar Update Engine starting
Feb 12 19:13:47.660376 tar[1135]: ./bandwidth
Feb 12 19:13:47.665487 systemd[1]: Started update-engine.service.
Feb 12 19:13:47.665612 update_engine[1131]: I0212 19:13:47.665500 1131 update_check_scheduler.cc:74] Next update check in 5m44s
Feb 12 19:13:47.668043 systemd[1]: Started locksmithd.service.
Feb 12 19:13:47.672929 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 12 19:13:47.684943 extend-filesystems[1164]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 19:13:47.684943 extend-filesystems[1164]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 19:13:47.684943 extend-filesystems[1164]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 12 19:13:47.689270 extend-filesystems[1114]: Resized filesystem in /dev/vda9
Feb 12 19:13:47.685681 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:13:47.685854 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:13:47.724524 tar[1135]: ./ptp
Feb 12 19:13:47.739412 env[1140]: time="2024-02-12T19:13:47.739314520Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:13:47.764670 env[1140]: time="2024-02-12T19:13:47.764612680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:13:47.764806 env[1140]: time="2024-02-12T19:13:47.764791440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:13:47.767836 env[1140]: time="2024-02-12T19:13:47.767759200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:13:47.767836 env[1140]: time="2024-02-12T19:13:47.767791200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768042 env[1140]: time="2024-02-12T19:13:47.768014600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768042 env[1140]: time="2024-02-12T19:13:47.768039200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768116 env[1140]: time="2024-02-12T19:13:47.768053680Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:13:47.768116 env[1140]: time="2024-02-12T19:13:47.768064400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768157 env[1140]: time="2024-02-12T19:13:47.768138000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768423 env[1140]: time="2024-02-12T19:13:47.768399160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768531 env[1140]: time="2024-02-12T19:13:47.768508160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:13:47.768531 env[1140]: time="2024-02-12T19:13:47.768527240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:13:47.768598 env[1140]: time="2024-02-12T19:13:47.768579240Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:13:47.768598 env[1140]: time="2024-02-12T19:13:47.768596000Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:13:47.771989 env[1140]: time="2024-02-12T19:13:47.771956880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:13:47.771989 env[1140]: time="2024-02-12T19:13:47.771990920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:13:47.772090 env[1140]: time="2024-02-12T19:13:47.772004320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:13:47.772090 env[1140]: time="2024-02-12T19:13:47.772031840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772090 env[1140]: time="2024-02-12T19:13:47.772052880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772090 env[1140]: time="2024-02-12T19:13:47.772068200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772090 env[1140]: time="2024-02-12T19:13:47.772080720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772439 env[1140]: time="2024-02-12T19:13:47.772419400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772479 env[1140]: time="2024-02-12T19:13:47.772440480Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772479 env[1140]: time="2024-02-12T19:13:47.772459480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772479 env[1140]: time="2024-02-12T19:13:47.772473160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772546 env[1140]: time="2024-02-12T19:13:47.772484960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:13:47.772628 env[1140]: time="2024-02-12T19:13:47.772606720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:13:47.772727 env[1140]: time="2024-02-12T19:13:47.772687080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:13:47.772983 env[1140]: time="2024-02-12T19:13:47.772946960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:13:47.772983 env[1140]: time="2024-02-12T19:13:47.772978680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773047 env[1140]: time="2024-02-12T19:13:47.772991960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:13:47.773151 env[1140]: time="2024-02-12T19:13:47.773094560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773151 env[1140]: time="2024-02-12T19:13:47.773109880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773151 env[1140]: time="2024-02-12T19:13:47.773121960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773151 env[1140]: time="2024-02-12T19:13:47.773133920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773151 env[1140]: time="2024-02-12T19:13:47.773146080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773260 env[1140]: time="2024-02-12T19:13:47.773157920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773260 env[1140]: time="2024-02-12T19:13:47.773170480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773260 env[1140]: time="2024-02-12T19:13:47.773181880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773260 env[1140]: time="2024-02-12T19:13:47.773194000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:13:47.773336 env[1140]: time="2024-02-12T19:13:47.773311880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773336 env[1140]: time="2024-02-12T19:13:47.773327200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773375 env[1140]: time="2024-02-12T19:13:47.773339480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773375 env[1140]: time="2024-02-12T19:13:47.773352160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:13:47.773375 env[1140]: time="2024-02-12T19:13:47.773367040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:13:47.773443 env[1140]: time="2024-02-12T19:13:47.773377440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:13:47.773443 env[1140]: time="2024-02-12T19:13:47.773394080Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:13:47.773443 env[1140]: time="2024-02-12T19:13:47.773426960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:13:47.773665 env[1140]: time="2024-02-12T19:13:47.773605600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:13:47.773665 env[1140]: time="2024-02-12T19:13:47.773664840Z" level=info msg="Connect containerd service"
Feb 12 19:13:47.776170 env[1140]: time="2024-02-12T19:13:47.773696200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:13:47.776440 tar[1135]: ./vlan
Feb 12 19:13:47.786114 env[1140]: time="2024-02-12T19:13:47.786071360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:13:47.786307 env[1140]: time="2024-02-12T19:13:47.786274160Z" level=info msg="Start subscribing containerd event"
Feb 12 19:13:47.786358 env[1140]: time="2024-02-12T19:13:47.786324480Z" level=info msg="Start recovering state"
Feb 12 19:13:47.786407 env[1140]: time="2024-02-12T19:13:47.786390080Z" level=info msg="Start event monitor"
Feb 12 19:13:47.786438 env[1140]: time="2024-02-12T19:13:47.786416680Z" level=info msg="Start snapshots syncer"
Feb 12 19:13:47.786438 env[1140]: time="2024-02-12T19:13:47.786426800Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:13:47.786438 env[1140]: time="2024-02-12T19:13:47.786434200Z" level=info msg="Start streaming server"
Feb 12 19:13:47.786794 env[1140]: time="2024-02-12T19:13:47.786772360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:13:47.786845 env[1140]: time="2024-02-12T19:13:47.786828880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:13:47.786925 env[1140]: time="2024-02-12T19:13:47.786876080Z" level=info msg="containerd successfully booted in 0.048663s"
Feb 12 19:13:47.786982 systemd[1]: Started containerd.service.
Feb 12 19:13:47.812634 tar[1135]: ./host-device
Feb 12 19:13:47.849606 tar[1135]: ./tuning
Feb 12 19:13:47.883193 tar[1135]: ./vrf
Feb 12 19:13:47.916479 tar[1135]: ./sbr
Feb 12 19:13:47.947565 tar[1135]: ./tap
Feb 12 19:13:47.986403 tar[1135]: ./dhcp
Feb 12 19:13:48.007097 tar[1138]: linux-arm64/LICENSE
Feb 12 19:13:48.007207 tar[1138]: linux-arm64/README.md
Feb 12 19:13:48.011989 systemd[1]: Finished prepare-helm.service.
Feb 12 19:13:48.028910 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:13:48.083529 tar[1135]: ./static
Feb 12 19:13:48.103416 systemd[1]: Finished prepare-critools.service.
Feb 12 19:13:48.106117 tar[1135]: ./firewall
Feb 12 19:13:48.139730 tar[1135]: ./macvlan
Feb 12 19:13:48.170373 tar[1135]: ./dummy
Feb 12 19:13:48.200349 tar[1135]: ./bridge
Feb 12 19:13:48.233358 tar[1135]: ./ipvlan
Feb 12 19:13:48.263677 tar[1135]: ./portmap
Feb 12 19:13:48.292557 tar[1135]: ./host-local
Feb 12 19:13:48.327990 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:13:48.369345 systemd-networkd[1041]: eth0: Gained IPv6LL
Feb 12 19:13:48.858527 sshd_keygen[1137]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:13:48.879292 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:13:48.881702 systemd[1]: Starting issuegen.service...
Feb 12 19:13:48.886752 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:13:48.886948 systemd[1]: Finished issuegen.service.
Feb 12 19:13:48.889450 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:13:48.896205 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:13:48.898714 systemd[1]: Started getty@tty1.service.
Feb 12 19:13:48.901043 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 12 19:13:48.902157 systemd[1]: Reached target getty.target.
Feb 12 19:13:48.902816 systemd[1]: Reached target multi-user.target.
Feb 12 19:13:48.904739 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:13:48.912829 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:13:48.913014 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:13:48.914170 systemd[1]: Startup finished in 612ms (kernel) + 5.896s (initrd) + 4.435s (userspace) = 10.944s.
Feb 12 19:13:51.565847 systemd[1]: Created slice system-sshd.slice.
Feb 12 19:13:51.567074 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:48914.service.
Feb 12 19:13:51.626183 sshd[1200]: Accepted publickey for core from 10.0.0.1 port 48914 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:13:51.628799 sshd[1200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:13:51.637836 systemd[1]: Created slice user-500.slice.
Feb 12 19:13:51.639185 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:13:51.644449 systemd-logind[1129]: New session 1 of user core.
Feb 12 19:13:51.657969 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:13:51.659621 systemd[1]: Starting user@500.service...
Feb 12 19:13:51.665272 (systemd)[1203]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:13:51.738388 systemd[1203]: Queued start job for default target default.target.
Feb 12 19:13:51.738954 systemd[1203]: Reached target paths.target.
Feb 12 19:13:51.738975 systemd[1203]: Reached target sockets.target.
Feb 12 19:13:51.738988 systemd[1203]: Reached target timers.target.
Feb 12 19:13:51.738999 systemd[1203]: Reached target basic.target.
Feb 12 19:13:51.739112 systemd[1]: Started user@500.service.
Feb 12 19:13:51.739658 systemd[1203]: Reached target default.target.
Feb 12 19:13:51.739715 systemd[1203]: Startup finished in 65ms.
Feb 12 19:13:51.740138 systemd[1]: Started session-1.scope.
Feb 12 19:13:51.793483 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:48922.service.
Feb 12 19:13:51.839698 sshd[1212]: Accepted publickey for core from 10.0.0.1 port 48922 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:13:51.844467 sshd[1212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:13:51.848240 systemd-logind[1129]: New session 2 of user core.
Feb 12 19:13:51.849136 systemd[1]: Started session-2.scope.
Feb 12 19:13:51.910031 sshd[1212]: pam_unix(sshd:session): session closed for user core
Feb 12 19:13:51.912802 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:48922.service: Deactivated successfully.
Feb 12 19:13:51.913500 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 19:13:51.914362 systemd-logind[1129]: Session 2 logged out. Waiting for processes to exit.
Feb 12 19:13:51.915674 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:48934.service.
Feb 12 19:13:51.920556 systemd-logind[1129]: Removed session 2.
Feb 12 19:13:51.954765 sshd[1218]: Accepted publickey for core from 10.0.0.1 port 48934 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:13:51.956084 sshd[1218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:13:51.960798 systemd-logind[1129]: New session 3 of user core.
Feb 12 19:13:51.961069 systemd[1]: Started session-3.scope.
Feb 12 19:13:52.016663 sshd[1218]: pam_unix(sshd:session): session closed for user core
Feb 12 19:13:52.019317 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:48934.service: Deactivated successfully.
Feb 12 19:13:52.019895 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 19:13:52.020673 systemd-logind[1129]: Session 3 logged out. Waiting for processes to exit.
Feb 12 19:13:52.021670 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:48942.service.
Feb 12 19:13:52.022807 systemd-logind[1129]: Removed session 3.
Feb 12 19:13:52.058005 sshd[1225]: Accepted publickey for core from 10.0.0.1 port 48942 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:13:52.059432 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:13:52.064267 systemd-logind[1129]: New session 4 of user core.
Feb 12 19:13:52.065977 systemd[1]: Started session-4.scope.
Feb 12 19:13:52.125998 sshd[1225]: pam_unix(sshd:session): session closed for user core
Feb 12 19:13:52.131852 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:48942.service: Deactivated successfully.
Feb 12 19:13:52.132592 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:13:52.133991 systemd-logind[1129]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:13:52.142608 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:48944.service.
Feb 12 19:13:52.144948 systemd-logind[1129]: Removed session 4.
Feb 12 19:13:52.180776 sshd[1231]: Accepted publickey for core from 10.0.0.1 port 48944 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:13:52.182700 sshd[1231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:13:52.187277 systemd[1]: Started session-5.scope.
Feb 12 19:13:52.187727 systemd-logind[1129]: New session 5 of user core.
Feb 12 19:13:52.250030 sudo[1234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:13:52.250263 sudo[1234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:13:52.854119 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:13:52.876674 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:13:52.877435 systemd[1]: Reached target network-online.target.
Feb 12 19:13:52.879695 systemd[1]: Starting docker.service...
Feb 12 19:13:53.009043 env[1252]: time="2024-02-12T19:13:53.008978548Z" level=info msg="Starting up"
Feb 12 19:13:53.010642 env[1252]: time="2024-02-12T19:13:53.010608483Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:13:53.010642 env[1252]: time="2024-02-12T19:13:53.010632099Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:13:53.010744 env[1252]: time="2024-02-12T19:13:53.010657561Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:13:53.010744 env[1252]: time="2024-02-12T19:13:53.010669001Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:13:53.012922 env[1252]: time="2024-02-12T19:13:53.012876525Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:13:53.012996 env[1252]: time="2024-02-12T19:13:53.012929417Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:13:53.012996 env[1252]: time="2024-02-12T19:13:53.012949549Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:13:53.012996 env[1252]: time="2024-02-12T19:13:53.012959184Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:13:53.201117 env[1252]: time="2024-02-12T19:13:53.201011114Z" level=info msg="Loading containers: start."
Feb 12 19:13:53.298922 kernel: Initializing XFRM netlink socket
Feb 12 19:13:53.322945 env[1252]: time="2024-02-12T19:13:53.322889357Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 19:13:53.384743 systemd-networkd[1041]: docker0: Link UP
Feb 12 19:13:53.393728 env[1252]: time="2024-02-12T19:13:53.393667479Z" level=info msg="Loading containers: done."
Feb 12 19:13:53.416999 env[1252]: time="2024-02-12T19:13:53.416944966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 19:13:53.417185 env[1252]: time="2024-02-12T19:13:53.417164078Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 19:13:53.417310 env[1252]: time="2024-02-12T19:13:53.417272199Z" level=info msg="Daemon has completed initialization"
Feb 12 19:13:53.441129 systemd[1]: Started docker.service.
Feb 12 19:13:53.445533 env[1252]: time="2024-02-12T19:13:53.445480616Z" level=info msg="API listen on /run/docker.sock"
Feb 12 19:13:53.463093 systemd[1]: Reloading.
Feb 12 19:13:53.512150 /usr/lib/systemd/system-generators/torcx-generator[1394]: time="2024-02-12T19:13:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:13:53.512178 /usr/lib/systemd/system-generators/torcx-generator[1394]: time="2024-02-12T19:13:53Z" level=info msg="torcx already run"
Feb 12 19:13:53.574219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:13:53.574242 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:13:53.591394 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:13:53.662241 systemd[1]: Started kubelet.service.
Feb 12 19:13:53.797561 kubelet[1431]: E0212 19:13:53.797417 1431 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 19:13:53.799650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:13:53.799781 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:13:54.101816 env[1140]: time="2024-02-12T19:13:54.101703801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\""
Feb 12 19:13:54.819624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052170441.mount: Deactivated successfully.
Feb 12 19:13:56.994065 env[1140]: time="2024-02-12T19:13:56.994002921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:56.995481 env[1140]: time="2024-02-12T19:13:56.995439013Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:56.997789 env[1140]: time="2024-02-12T19:13:56.997747741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:56.999393 env[1140]: time="2024-02-12T19:13:56.999349730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:57.000991 env[1140]: time="2024-02-12T19:13:57.000943703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\""
Feb 12 19:13:57.013830 env[1140]: time="2024-02-12T19:13:57.013785257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\""
Feb 12 19:13:59.273782 env[1140]: time="2024-02-12T19:13:59.273730744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:59.275047 env[1140]: time="2024-02-12T19:13:59.275020598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:59.277880 env[1140]: time="2024-02-12T19:13:59.277849919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:59.279358 env[1140]: time="2024-02-12T19:13:59.279304927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:13:59.280782 env[1140]: time="2024-02-12T19:13:59.280745132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\""
Feb 12 19:13:59.292114 env[1140]: time="2024-02-12T19:13:59.292073660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\""
Feb 12 19:14:00.761974 env[1140]: time="2024-02-12T19:14:00.761916550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:00.763285 env[1140]: time="2024-02-12T19:14:00.763254134Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:00.764931 env[1140]: time="2024-02-12T19:14:00.764899919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:00.766483 env[1140]: time="2024-02-12T19:14:00.766458454Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:00.767329 env[1140]: time="2024-02-12T19:14:00.767296534Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\""
Feb 12 19:14:00.776778 env[1140]: time="2024-02-12T19:14:00.776736006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 12 19:14:01.969557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206027795.mount: Deactivated successfully.
Feb 12 19:14:02.407469 env[1140]: time="2024-02-12T19:14:02.407346505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.409069 env[1140]: time="2024-02-12T19:14:02.409021057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.410761 env[1140]: time="2024-02-12T19:14:02.410709392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.412136 env[1140]: time="2024-02-12T19:14:02.412102360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.412610 env[1140]: time="2024-02-12T19:14:02.412582306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\""
Feb 12 19:14:02.421691 env[1140]: time="2024-02-12T19:14:02.421658845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 19:14:02.949455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666835690.mount: Deactivated successfully.
Feb 12 19:14:02.953471 env[1140]: time="2024-02-12T19:14:02.953429149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.954823 env[1140]: time="2024-02-12T19:14:02.954783146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.956547 env[1140]: time="2024-02-12T19:14:02.956502956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.959025 env[1140]: time="2024-02-12T19:14:02.958978130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:02.959520 env[1140]: time="2024-02-12T19:14:02.959473955Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 12 19:14:02.969688 env[1140]: time="2024-02-12T19:14:02.969648700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\""
Feb 12 19:14:03.781421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092318501.mount: Deactivated successfully.
Feb 12 19:14:04.050566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:14:04.050734 systemd[1]: Stopped kubelet.service.
Feb 12 19:14:04.052236 systemd[1]: Started kubelet.service.
Feb 12 19:14:04.096515 kubelet[1485]: E0212 19:14:04.096451 1485 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 19:14:04.099445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:14:04.099583 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:14:05.744609 env[1140]: time="2024-02-12T19:14:05.744559441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:05.746242 env[1140]: time="2024-02-12T19:14:05.746201923Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:05.748018 env[1140]: time="2024-02-12T19:14:05.747985473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:05.749463 env[1140]: time="2024-02-12T19:14:05.749432092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:05.750342 env[1140]: time="2024-02-12T19:14:05.750304751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\""
Feb 12 19:14:05.759390 env[1140]: time="2024-02-12T19:14:05.759346804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 12 19:14:06.816012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461040469.mount: Deactivated successfully.
Feb 12 19:14:07.539164 env[1140]: time="2024-02-12T19:14:07.539081933Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:07.542725 env[1140]: time="2024-02-12T19:14:07.542677643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:07.545314 env[1140]: time="2024-02-12T19:14:07.545264313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:07.547806 env[1140]: time="2024-02-12T19:14:07.547764369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:07.548566 env[1140]: time="2024-02-12T19:14:07.548529472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 12 19:14:12.387919 systemd[1]: Stopped kubelet.service.
Feb 12 19:14:12.402402 systemd[1]: Reloading.
Feb 12 19:14:12.456963 /usr/lib/systemd/system-generators/torcx-generator[1596]: time="2024-02-12T19:14:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:14:12.457341 /usr/lib/systemd/system-generators/torcx-generator[1596]: time="2024-02-12T19:14:12Z" level=info msg="torcx already run"
Feb 12 19:14:12.511295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:14:12.511312 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:14:12.526652 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:14:12.595127 systemd[1]: Started kubelet.service.
Feb 12 19:14:12.643758 kubelet[1633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:14:12.643758 kubelet[1633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:14:12.643758 kubelet[1633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:14:12.643758 kubelet[1633]: I0212 19:14:12.643723 1633 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:14:13.209204 kubelet[1633]: I0212 19:14:13.209170 1633 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 19:14:13.209383 kubelet[1633]: I0212 19:14:13.209372 1633 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:14:13.209653 kubelet[1633]: I0212 19:14:13.209637 1633 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 19:14:13.217266 kubelet[1633]: I0212 19:14:13.217209 1633 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:14:13.217266 kubelet[1633]: E0212 19:14:13.217239 1633 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.219217 kubelet[1633]: W0212 19:14:13.219191 1633 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:14:13.221414 kubelet[1633]: I0212 19:14:13.221382 1633 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:14:13.221608 kubelet[1633]: I0212 19:14:13.221595 1633 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:14:13.221678 kubelet[1633]: I0212 19:14:13.221667 1633 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:14:13.221758 kubelet[1633]: I0212 19:14:13.221691 1633 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:14:13.221758 kubelet[1633]: I0212 19:14:13.221703 1633 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 19:14:13.221821 kubelet[1633]: I0212 19:14:13.221807 1633 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:14:13.228464 kubelet[1633]: I0212 19:14:13.228430 1633 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 19:14:13.228648 kubelet[1633]: I0212 19:14:13.228631 1633 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:14:13.228770 kubelet[1633]: I0212 19:14:13.228759 1633 kubelet.go:309] "Adding apiserver pod source"
Feb 12 19:14:13.228843 kubelet[1633]: I0212 19:14:13.228832 1633 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:14:13.229195 kubelet[1633]: W0212 19:14:13.229074 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.229195 kubelet[1633]: E0212 19:14:13.229149 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.229474 kubelet[1633]: W0212 19:14:13.229383 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.229474 kubelet[1633]: E0212 19:14:13.229424 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.229990 kubelet[1633]: I0212 19:14:13.229971 1633 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:14:13.230427 kubelet[1633]: W0212 19:14:13.230392 1633 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:14:13.231163 kubelet[1633]: I0212 19:14:13.231139 1633 server.go:1168] "Started kubelet"
Feb 12 19:14:13.231463 kubelet[1633]: I0212 19:14:13.231443 1633 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:14:13.231680 kubelet[1633]: I0212 19:14:13.231650 1633 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 19:14:13.232239 kubelet[1633]: E0212 19:14:13.232129 1633 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333731e984e62", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 14, 13, 231103586, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 14, 13, 231103586, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.42:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.42:6443: connect: connection refused'(may retry after sleeping)
Feb 12 19:14:13.232446 kubelet[1633]: I0212 19:14:13.232427 1633 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 19:14:13.233918 kubelet[1633]: E0212 19:14:13.233871 1633 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:14:13.234033 kubelet[1633]: E0212 19:14:13.234018 1633 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:14:13.234900 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 19:14:13.235030 kubelet[1633]: I0212 19:14:13.235005 1633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:14:13.235293 kubelet[1633]: I0212 19:14:13.235270 1633 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 19:14:13.235753 kubelet[1633]: E0212 19:14:13.235732 1633 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 19:14:13.236213 kubelet[1633]: E0212 19:14:13.236191 1633 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms"
Feb 12 19:14:13.236774 kubelet[1633]: I0212 19:14:13.236754 1633 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 19:14:13.236924 kubelet[1633]: W0212 19:14:13.236773 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.237015 kubelet[1633]: E0212 19:14:13.237001 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.255036 kubelet[1633]: I0212 19:14:13.255009 1633 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:14:13.256302 kubelet[1633]: I0212 19:14:13.256281 1633 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:14:13.256419 kubelet[1633]: I0212 19:14:13.256408 1633 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 19:14:13.256503 kubelet[1633]: I0212 19:14:13.256492 1633 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 19:14:13.257587 kubelet[1633]: E0212 19:14:13.257561 1633 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:14:13.259088 kubelet[1633]: W0212 19:14:13.258554 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.259088 kubelet[1633]: E0212 19:14:13.258625 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:13.260721 kubelet[1633]: I0212 19:14:13.260690 1633 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:14:13.260832 kubelet[1633]: I0212 19:14:13.260819 1633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:14:13.260941 kubelet[1633]: I0212 19:14:13.260929 1633 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:14:13.265346 kubelet[1633]: I0212 19:14:13.265312 1633 policy_none.go:49] "None policy: Start"
Feb 12 19:14:13.266403 kubelet[1633]: I0212 19:14:13.266385 1633 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:14:13.266589 kubelet[1633]: I0212 19:14:13.266558 1633 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:14:13.272060 systemd[1]: Created slice kubepods.slice.
Feb 12 19:14:13.277023 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 19:14:13.279251 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 19:14:13.291736 kubelet[1633]: I0212 19:14:13.291707 1633 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:14:13.292162 kubelet[1633]: I0212 19:14:13.292144 1633 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:14:13.293710 kubelet[1633]: E0212 19:14:13.293689 1633 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 12 19:14:13.337257 kubelet[1633]: I0212 19:14:13.337225 1633 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:14:13.337973 kubelet[1633]: E0212 19:14:13.337944 1633 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Feb 12 19:14:13.358132 kubelet[1633]: I0212 19:14:13.358055 1633 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:13.359338 kubelet[1633]: I0212 19:14:13.359302 1633 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:13.360972 kubelet[1633]: I0212 19:14:13.360555 1633 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:13.366939 systemd[1]: Created slice kubepods-burstable-podb07df3aa5df0aa04f2d325e22e3237f6.slice.
Feb 12 19:14:13.378405 systemd[1]: Created slice kubepods-burstable-pod2b0e94b38682f4e439413801d3cc54db.slice.
Feb 12 19:14:13.395274 systemd[1]: Created slice kubepods-burstable-pod7709ea05d7cdf82b0d7e594b61a10331.slice.
Feb 12 19:14:13.437426 kubelet[1633]: E0212 19:14:13.437388 1633 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="400ms"
Feb 12 19:14:13.539340 kubelet[1633]: I0212 19:14:13.537902 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:13.539340 kubelet[1633]: I0212 19:14:13.538957 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:13.539340 kubelet[1633]: I0212 19:14:13.538991 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:13.539340 kubelet[1633]: I0212 19:14:13.539012 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:14:13.539340 kubelet[1633]: I0212 19:14:13.539030 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b07df3aa5df0aa04f2d325e22e3237f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07df3aa5df0aa04f2d325e22e3237f6\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:13.539548 kubelet[1633]: I0212 19:14:13.539053 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b07df3aa5df0aa04f2d325e22e3237f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07df3aa5df0aa04f2d325e22e3237f6\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:13.539548 kubelet[1633]: I0212 19:14:13.539082 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b07df3aa5df0aa04f2d325e22e3237f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b07df3aa5df0aa04f2d325e22e3237f6\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:13.539548 kubelet[1633]: I0212 19:14:13.539100 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:13.539548 kubelet[1633]: I0212 19:14:13.539118 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:13.539915 kubelet[1633]: I0212 19:14:13.539896 1633 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:14:13.540222 kubelet[1633]: E0212 19:14:13.540208 1633 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Feb 12 19:14:13.676069 kubelet[1633]: E0212 19:14:13.676036 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:13.677177 env[1140]: time="2024-02-12T19:14:13.677124526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b07df3aa5df0aa04f2d325e22e3237f6,Namespace:kube-system,Attempt:0,}"
Feb 12 19:14:13.691811 kubelet[1633]: E0212 19:14:13.691771 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:13.692621 env[1140]: time="2024-02-12T19:14:13.692319653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,}"
Feb 12 19:14:13.697714 kubelet[1633]: E0212 19:14:13.697667 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:13.698538 env[1140]: time="2024-02-12T19:14:13.698237530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,}"
Feb 12 19:14:13.839184 kubelet[1633]: E0212 19:14:13.838546 1633 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms"
Feb 12 19:14:13.941595 kubelet[1633]: I0212 19:14:13.941542 1633 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:14:13.941901 kubelet[1633]: E0212 19:14:13.941865 1633 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Feb 12 19:14:14.117406 kubelet[1633]: W0212 19:14:14.117111 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.117406 kubelet[1633]: E0212 19:14:14.117174 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.217810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129934687.mount: Deactivated successfully.
Feb 12 19:14:14.228065 env[1140]: time="2024-02-12T19:14:14.228009543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.233091 env[1140]: time="2024-02-12T19:14:14.233038116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.235715 env[1140]: time="2024-02-12T19:14:14.235669260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.237860 env[1140]: time="2024-02-12T19:14:14.237801608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.239567 env[1140]: time="2024-02-12T19:14:14.239523255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.241292 env[1140]: time="2024-02-12T19:14:14.241248427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.242644 env[1140]: time="2024-02-12T19:14:14.242591140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.245586 env[1140]: time="2024-02-12T19:14:14.245545172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.246938 env[1140]: time="2024-02-12T19:14:14.246869938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.249869 env[1140]: time="2024-02-12T19:14:14.249833905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.251708 env[1140]: time="2024-02-12T19:14:14.251661993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.254109 env[1140]: time="2024-02-12T19:14:14.254073965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:14:14.268323 kubelet[1633]: W0212 19:14:14.268211 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.268323 kubelet[1633]: E0212 19:14:14.268277 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.314840 env[1140]: time="2024-02-12T19:14:14.314715777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:14:14.314840 env[1140]: time="2024-02-12T19:14:14.314758401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:14:14.314840 env[1140]: time="2024-02-12T19:14:14.314769739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:14:14.315201 env[1140]: time="2024-02-12T19:14:14.315133249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:14:14.315243 env[1140]: time="2024-02-12T19:14:14.315212289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:14:14.315282 env[1140]: time="2024-02-12T19:14:14.315250707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:14:14.315423 env[1140]: time="2024-02-12T19:14:14.315377138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79c7fab1e14e0841f1602460a9da53bf88fc7fc49c20b9752d9ddc365d881dd2 pid=1688 runtime=io.containerd.runc.v2
Feb 12 19:14:14.315684 env[1140]: time="2024-02-12T19:14:14.315641458Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6671a030b324ea48897ff0c82ac29341262cb9df93800d4a353e56ca671a6df7 pid=1689 runtime=io.containerd.runc.v2
Feb 12 19:14:14.317095 env[1140]: time="2024-02-12T19:14:14.317015979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:14:14.317095 env[1140]: time="2024-02-12T19:14:14.317057963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:14:14.317095 env[1140]: time="2024-02-12T19:14:14.317068739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:14:14.317471 env[1140]: time="2024-02-12T19:14:14.317405249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5d33a313ff9f93a371c5f26a65733fadf91f1599c99cf65d9910cfe5373398c pid=1693 runtime=io.containerd.runc.v2
Feb 12 19:14:14.328727 systemd[1]: Started cri-containerd-6671a030b324ea48897ff0c82ac29341262cb9df93800d4a353e56ca671a6df7.scope.
Feb 12 19:14:14.335223 systemd[1]: Started cri-containerd-79c7fab1e14e0841f1602460a9da53bf88fc7fc49c20b9752d9ddc365d881dd2.scope.
Feb 12 19:14:14.344368 systemd[1]: Started cri-containerd-e5d33a313ff9f93a371c5f26a65733fadf91f1599c99cf65d9910cfe5373398c.scope.
Feb 12 19:14:14.437985 kubelet[1633]: W0212 19:14:14.432545 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.437985 kubelet[1633]: E0212 19:14:14.432585 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.445757 env[1140]: time="2024-02-12T19:14:14.443527198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,} returns sandbox id \"6671a030b324ea48897ff0c82ac29341262cb9df93800d4a353e56ca671a6df7\""
Feb 12 19:14:14.448374 env[1140]: time="2024-02-12T19:14:14.447617431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b07df3aa5df0aa04f2d325e22e3237f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5d33a313ff9f93a371c5f26a65733fadf91f1599c99cf65d9910cfe5373398c\""
Feb 12 19:14:14.448488 kubelet[1633]: E0212 19:14:14.447986 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:14.448488 kubelet[1633]: E0212 19:14:14.448211 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:14.448937 env[1140]: time="2024-02-12T19:14:14.448857789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c7fab1e14e0841f1602460a9da53bf88fc7fc49c20b9752d9ddc365d881dd2\""
Feb 12 19:14:14.449678 kubelet[1633]: E0212 19:14:14.449516 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:14.451386 env[1140]: time="2024-02-12T19:14:14.451341189Z" level=info msg="CreateContainer within sandbox \"6671a030b324ea48897ff0c82ac29341262cb9df93800d4a353e56ca671a6df7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:14:14.451502 env[1140]: time="2024-02-12T19:14:14.451369511Z" level=info msg="CreateContainer within sandbox \"e5d33a313ff9f93a371c5f26a65733fadf91f1599c99cf65d9910cfe5373398c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:14:14.451701 env[1140]: time="2024-02-12T19:14:14.451664238Z" level=info msg="CreateContainer within sandbox \"79c7fab1e14e0841f1602460a9da53bf88fc7fc49c20b9752d9ddc365d881dd2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:14:14.468272 kubelet[1633]: E0212 19:14:14.468154 1633 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333731e984e62", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 14, 13, 231103586, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 14, 13, 231103586, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.42:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.42:6443: connect: connection refused'(may retry after sleeping)
Feb 12 19:14:14.505852 env[1140]: time="2024-02-12T19:14:14.505797035Z" level=info msg="CreateContainer within sandbox \"6671a030b324ea48897ff0c82ac29341262cb9df93800d4a353e56ca671a6df7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4efc2f0ad71cb7c19fcd89a3b2a243b8ffb82a4dc0fc1d63e83d0c017b38ef5\""
Feb 12 19:14:14.506586 env[1140]: time="2024-02-12T19:14:14.506558107Z" level=info msg="StartContainer for \"f4efc2f0ad71cb7c19fcd89a3b2a243b8ffb82a4dc0fc1d63e83d0c017b38ef5\""
Feb 12 19:14:14.509456 env[1140]: time="2024-02-12T19:14:14.509413470Z" level=info msg="CreateContainer within sandbox \"e5d33a313ff9f93a371c5f26a65733fadf91f1599c99cf65d9910cfe5373398c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1367d81ff69c30fcf995ed05a9fc19ee46880b6fc3136a7debc7f482b7cbf3d\""
Feb 12 19:14:14.509988 env[1140]: time="2024-02-12T19:14:14.509961620Z" level=info msg="StartContainer for \"f1367d81ff69c30fcf995ed05a9fc19ee46880b6fc3136a7debc7f482b7cbf3d\""
Feb 12 19:14:14.516915 env[1140]: time="2024-02-12T19:14:14.516851812Z" level=info msg="CreateContainer within sandbox \"79c7fab1e14e0841f1602460a9da53bf88fc7fc49c20b9752d9ddc365d881dd2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ceab8f74762c1f647c34d98c9fec5d35d63219f60bae8635f1b74cf303bcb3b\""
Feb 12 19:14:14.517446 env[1140]: time="2024-02-12T19:14:14.517414945Z" level=info msg="StartContainer for \"0ceab8f74762c1f647c34d98c9fec5d35d63219f60bae8635f1b74cf303bcb3b\""
Feb 12 19:14:14.522493 systemd[1]: Started cri-containerd-f4efc2f0ad71cb7c19fcd89a3b2a243b8ffb82a4dc0fc1d63e83d0c017b38ef5.scope.
Feb 12 19:14:14.531176 systemd[1]: Started cri-containerd-f1367d81ff69c30fcf995ed05a9fc19ee46880b6fc3136a7debc7f482b7cbf3d.scope.
Feb 12 19:14:14.554923 systemd[1]: Started cri-containerd-0ceab8f74762c1f647c34d98c9fec5d35d63219f60bae8635f1b74cf303bcb3b.scope.
Feb 12 19:14:14.612474 env[1140]: time="2024-02-12T19:14:14.612427274Z" level=info msg="StartContainer for \"f1367d81ff69c30fcf995ed05a9fc19ee46880b6fc3136a7debc7f482b7cbf3d\" returns successfully"
Feb 12 19:14:14.618522 env[1140]: time="2024-02-12T19:14:14.618476873Z" level=info msg="StartContainer for \"f4efc2f0ad71cb7c19fcd89a3b2a243b8ffb82a4dc0fc1d63e83d0c017b38ef5\" returns successfully"
Feb 12 19:14:14.626389 env[1140]: time="2024-02-12T19:14:14.625906402Z" level=info msg="StartContainer for \"0ceab8f74762c1f647c34d98c9fec5d35d63219f60bae8635f1b74cf303bcb3b\" returns successfully"
Feb 12 19:14:14.645700 kubelet[1633]: E0212 19:14:14.639265 1633 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="1.6s"
Feb 12 19:14:14.707524 kubelet[1633]: W0212 19:14:14.707378 1633 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.707524 kubelet[1633]: E0212 19:14:14.707441 1633 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Feb 12 19:14:14.748027 kubelet[1633]: I0212 19:14:14.747990 1633 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:14:14.748358 kubelet[1633]: E0212 19:14:14.748332 1633 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Feb 12 19:14:15.265574 kubelet[1633]: E0212 19:14:15.265537 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:15.268252 kubelet[1633]: E0212 19:14:15.268229 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:15.270327 kubelet[1633]: E0212 19:14:15.270296 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:16.272417 kubelet[1633]: E0212 19:14:16.272387 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:16.349395 kubelet[1633]: I0212 19:14:16.349364 1633 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:14:16.526837 kubelet[1633]: E0212 19:14:16.526731 1633 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 12 19:14:16.593887 kubelet[1633]: I0212 19:14:16.593845 1633 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 19:14:16.596255 kubelet[1633]: E0212 19:14:16.596233 1633 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Feb 12 19:14:17.231772 kubelet[1633]: I0212 19:14:17.231731 1633 apiserver.go:52] "Watching apiserver"
Feb 12 19:14:17.237156 kubelet[1633]: I0212 19:14:17.237124 1633 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 19:14:17.270765 kubelet[1633]: I0212 19:14:17.270724 1633 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:14:18.842611 kubelet[1633]: E0212 19:14:18.842579 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:19.200417 systemd[1]: Reloading.
Feb 12 19:14:19.267211 /usr/lib/systemd/system-generators/torcx-generator[1930]: time="2024-02-12T19:14:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:14:19.267240 /usr/lib/systemd/system-generators/torcx-generator[1930]: time="2024-02-12T19:14:19Z" level=info msg="torcx already run"
Feb 12 19:14:19.275600 kubelet[1633]: E0212 19:14:19.275549 1633 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:19.328270 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:14:19.328450 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:14:19.345145 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:14:19.431535 systemd[1]: Stopping kubelet.service...
Feb 12 19:14:19.451334 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 19:14:19.451718 systemd[1]: Stopped kubelet.service.
Feb 12 19:14:19.454335 systemd[1]: Started kubelet.service.
Feb 12 19:14:19.522301 kubelet[1968]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:14:19.522301 kubelet[1968]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:14:19.522301 kubelet[1968]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:14:19.522745 kubelet[1968]: I0212 19:14:19.522346 1968 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:14:19.528483 kubelet[1968]: I0212 19:14:19.528449 1968 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 19:14:19.528621 kubelet[1968]: I0212 19:14:19.528609 1968 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:14:19.528945 kubelet[1968]: I0212 19:14:19.528926 1968 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 19:14:19.530525 kubelet[1968]: I0212 19:14:19.530500 1968 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 19:14:19.531666 kubelet[1968]: I0212 19:14:19.531623 1968 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:14:19.533283 kubelet[1968]: W0212 19:14:19.533264 1968 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:14:19.534259 kubelet[1968]: I0212 19:14:19.534234 1968 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:14:19.534580 kubelet[1968]: I0212 19:14:19.534564 1968 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:14:19.534727 kubelet[1968]: I0212 19:14:19.534713 1968 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:14:19.534896 kubelet[1968]: I0212 19:14:19.534864 1968 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:14:19.534982 kubelet[1968]: I0212 19:14:19.534970 1968 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 19:14:19.535075 kubelet[1968]: I0212 19:14:19.535059 1968 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:14:19.537579 kubelet[1968]: I0212 19:14:19.537559 1968 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 19:14:19.537579 kubelet[1968]: I0212 19:14:19.537585 1968 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:14:19.537697 kubelet[1968]: I0212 19:14:19.537609 1968 kubelet.go:309] "Adding apiserver pod source"
Feb 12 19:14:19.537697 kubelet[1968]: I0212 19:14:19.537623 1968 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:14:19.538910 kubelet[1968]: I0212 19:14:19.538849 1968 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:14:19.539434 kubelet[1968]: I0212 19:14:19.539404 1968 server.go:1168] "Started kubelet"
Feb 12 19:14:19.539697 kubelet[1968]: I0212 19:14:19.539668 1968 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:14:19.540663 kubelet[1968]: I0212 19:14:19.540645 1968 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 19:14:19.541834 kubelet[1968]: I0212 19:14:19.539719 1968 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 19:14:19.542065 kubelet[1968]: E0212 19:14:19.542031 1968 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:14:19.542065 kubelet[1968]: E0212 19:14:19.542061 1968 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:14:19.552670 kubelet[1968]: I0212 19:14:19.552635 1968 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:14:19.560300 kubelet[1968]: I0212 19:14:19.560261 1968 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 19:14:19.560451 kubelet[1968]: I0212 19:14:19.560432 1968 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 19:14:19.570598 kubelet[1968]: I0212 19:14:19.570572 1968 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:14:19.572180 kubelet[1968]: I0212 19:14:19.572158 1968 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:14:19.572355 kubelet[1968]: I0212 19:14:19.572342 1968 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 19:14:19.572433 kubelet[1968]: I0212 19:14:19.572422 1968 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 19:14:19.572548 kubelet[1968]: E0212 19:14:19.572537 1968 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:14:19.576182 sudo[1990]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 19:14:19.576408 sudo[1990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.640185 1968 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.640221 1968 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.640242 1968 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.640438 1968 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.640453 1968 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.640461 1968 policy_none.go:49] "None policy: Start"
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.641223 1968 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:14:19.641402 kubelet[1968]: I0212 19:14:19.641247 1968 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:14:19.641700 kubelet[1968]: I0212 19:14:19.641466 1968 state_mem.go:75] "Updated machine memory state"
Feb 12 19:14:19.646125 kubelet[1968]: I0212 19:14:19.646100 1968 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:14:19.646415 kubelet[1968]: I0212 19:14:19.646343 1968 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:14:19.663535 kubelet[1968]: I0212 19:14:19.663498 1968 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 19:14:19.672101 kubelet[1968]: I0212 19:14:19.671917 1968 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb 12 19:14:19.672101 kubelet[1968]: I0212 19:14:19.672026 1968 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 19:14:19.673981 kubelet[1968]: I0212 19:14:19.673954 1968 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:19.674191 kubelet[1968]: I0212 19:14:19.674173 1968 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:19.674302 kubelet[1968]: I0212 19:14:19.674287 1968 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:19.681155 kubelet[1968]: E0212 19:14:19.681125 1968 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 12 19:14:19.861936 kubelet[1968]: I0212 19:14:19.861814 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:19.861936 kubelet[1968]: I0212 19:14:19.861863 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:19.861936 kubelet[1968]: I0212 19:14:19.861911 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:14:19.861936 kubelet[1968]: I0212 19:14:19.861934 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:19.862149 kubelet[1968]: I0212 19:14:19.861954 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:19.862149 kubelet[1968]: I0212 19:14:19.861973 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b07df3aa5df0aa04f2d325e22e3237f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07df3aa5df0aa04f2d325e22e3237f6\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:19.862149 kubelet[1968]: I0212 19:14:19.861992 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b07df3aa5df0aa04f2d325e22e3237f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07df3aa5df0aa04f2d325e22e3237f6\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:19.862149 kubelet[1968]: I0212 19:14:19.862021 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b07df3aa5df0aa04f2d325e22e3237f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b07df3aa5df0aa04f2d325e22e3237f6\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:19.862149 kubelet[1968]: I0212 19:14:19.862045 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:14:19.980721 kubelet[1968]: E0212 19:14:19.980675 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:19.982436 kubelet[1968]: E0212 19:14:19.981349 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:19.982436 kubelet[1968]: E0212 19:14:19.981587 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:20.061327 sudo[1990]: pam_unix(sudo:session): session closed for user root
Feb 12 19:14:20.538203 kubelet[1968]: I0212 19:14:20.538165 1968 apiserver.go:52] "Watching apiserver"
Feb 12 19:14:20.560634 kubelet[1968]: I0212 19:14:20.560593 1968 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 19:14:20.566677 kubelet[1968]: I0212 19:14:20.566646 1968 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:14:20.617238 kubelet[1968]: E0212 19:14:20.617200 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:20.625165 kubelet[1968]: E0212 19:14:20.625083 1968 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 12 19:14:20.625717 kubelet[1968]: E0212 19:14:20.625691 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:20.629143 kubelet[1968]: E0212 19:14:20.629108 1968 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 19:14:20.629660 kubelet[1968]: E0212 19:14:20.629640 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:20.648060 kubelet[1968]: I0212 19:14:20.648019 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.647971549 podCreationTimestamp="2024-02-12 19:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:14:20.640730616 +0000 UTC m=+1.183538497" watchObservedRunningTime="2024-02-12 19:14:20.647971549 +0000 UTC m=+1.190779470"
Feb 12 19:14:20.656323 kubelet[1968]: I0212 19:14:20.656292 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.656160619 podCreationTimestamp="2024-02-12 19:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:14:20.648241278 +0000 UTC m=+1.191049159" watchObservedRunningTime="2024-02-12 19:14:20.656160619 +0000 UTC m=+1.198968540"
Feb 12 19:14:20.656425 kubelet[1968]: I0212 19:14:20.656397 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.656381696 podCreationTimestamp="2024-02-12 19:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:14:20.655309185 +0000 UTC m=+1.198117106" watchObservedRunningTime="2024-02-12 19:14:20.656381696 +0000 UTC m=+1.199189617"
Feb 12 19:14:21.619066 kubelet[1968]: E0212 19:14:21.619031 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:21.620238 kubelet[1968]: E0212 19:14:21.620214 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:21.920845 sudo[1234]: pam_unix(sudo:session): session closed for user root
Feb 12 19:14:21.922841 sshd[1231]: pam_unix(sshd:session): session closed for user core
Feb 12 19:14:21.925492 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:48944.service: Deactivated successfully.
Feb 12 19:14:21.926320 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:14:21.926504 systemd[1]: session-5.scope: Consumed 7.266s CPU time.
Feb 12 19:14:21.926921 systemd-logind[1129]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:14:21.927637 systemd-logind[1129]: Removed session 5.
Feb 12 19:14:22.620389 kubelet[1968]: E0212 19:14:22.620358 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:26.918417 kubelet[1968]: E0212 19:14:26.915445 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:27.631260 kubelet[1968]: E0212 19:14:27.631223 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:30.087366 kubelet[1968]: E0212 19:14:30.087336 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:31.266459 kubelet[1968]: E0212 19:14:31.266422 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:14:31.761298 kubelet[1968]: I0212 19:14:31.761210 1968 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 19:14:31.761723 env[1140]: time="2024-02-12T19:14:31.761632223Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:14:31.762006 kubelet[1968]: I0212 19:14:31.761792 1968 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 19:14:32.521751 kubelet[1968]: I0212 19:14:32.521702 1968 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:14:32.526985 systemd[1]: Created slice kubepods-besteffort-pod2300bce8_e204_433d_b91c_803c925cd10d.slice.
Feb 12 19:14:32.543831 kubelet[1968]: I0212 19:14:32.543774 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2300bce8-e204-433d-b91c-803c925cd10d-kube-proxy\") pod \"kube-proxy-wrj6b\" (UID: \"2300bce8-e204-433d-b91c-803c925cd10d\") " pod="kube-system/kube-proxy-wrj6b" Feb 12 19:14:32.543831 kubelet[1968]: I0212 19:14:32.543832 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjc2p\" (UniqueName: \"kubernetes.io/projected/2300bce8-e204-433d-b91c-803c925cd10d-kube-api-access-qjc2p\") pod \"kube-proxy-wrj6b\" (UID: \"2300bce8-e204-433d-b91c-803c925cd10d\") " pod="kube-system/kube-proxy-wrj6b" Feb 12 19:14:32.544032 kubelet[1968]: I0212 19:14:32.543857 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2300bce8-e204-433d-b91c-803c925cd10d-xtables-lock\") pod \"kube-proxy-wrj6b\" (UID: \"2300bce8-e204-433d-b91c-803c925cd10d\") " pod="kube-system/kube-proxy-wrj6b" Feb 12 19:14:32.544032 kubelet[1968]: I0212 19:14:32.543895 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2300bce8-e204-433d-b91c-803c925cd10d-lib-modules\") pod \"kube-proxy-wrj6b\" (UID: \"2300bce8-e204-433d-b91c-803c925cd10d\") " pod="kube-system/kube-proxy-wrj6b" Feb 12 19:14:32.544228 kubelet[1968]: I0212 19:14:32.544198 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:14:32.553389 systemd[1]: Created slice kubepods-burstable-pod69f7a0b3_52ae_4e36_acee_64daad43336b.slice. Feb 12 19:14:32.644397 kubelet[1968]: I0212 19:14:32.644362 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-config-path\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.644635 kubelet[1968]: I0212 19:14:32.644621 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmmm\" (UniqueName: \"kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-kube-api-access-7hmmm\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.644712 kubelet[1968]: I0212 19:14:32.644702 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-hostproc\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.644817 kubelet[1968]: I0212 19:14:32.644796 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-cgroup\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645001 kubelet[1968]: I0212 19:14:32.644983 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-kernel\") pod 
\"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645097 kubelet[1968]: I0212 19:14:32.645086 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-hubble-tls\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645194 kubelet[1968]: I0212 19:14:32.645184 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-run\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645271 kubelet[1968]: I0212 19:14:32.645260 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-etc-cni-netd\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645358 kubelet[1968]: I0212 19:14:32.645347 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cni-path\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645562 kubelet[1968]: I0212 19:14:32.645519 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-bpf-maps\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645731 kubelet[1968]: I0212 19:14:32.645717 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-lib-modules\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645849 kubelet[1968]: I0212 19:14:32.645837 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-xtables-lock\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.645943 kubelet[1968]: I0212 19:14:32.645932 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69f7a0b3-52ae-4e36-acee-64daad43336b-clustermesh-secrets\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.646024 kubelet[1968]: I0212 19:14:32.646014 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-net\") pod \"cilium-jvnnz\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " pod="kube-system/cilium-jvnnz" Feb 12 19:14:32.726983 kubelet[1968]: I0212 19:14:32.726943 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 
19:14:32.733096 systemd[1]: Created slice kubepods-besteffort-podebb3c31b_b83b_48e3_84c5_27b67f551477.slice. Feb 12 19:14:32.746696 kubelet[1968]: I0212 19:14:32.746652 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5sg\" (UniqueName: \"kubernetes.io/projected/ebb3c31b-b83b-48e3-84c5-27b67f551477-kube-api-access-sp5sg\") pod \"cilium-operator-574c4bb98d-rhg62\" (UID: \"ebb3c31b-b83b-48e3-84c5-27b67f551477\") " pod="kube-system/cilium-operator-574c4bb98d-rhg62" Feb 12 19:14:32.746848 kubelet[1968]: I0212 19:14:32.746720 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebb3c31b-b83b-48e3-84c5-27b67f551477-cilium-config-path\") pod \"cilium-operator-574c4bb98d-rhg62\" (UID: \"ebb3c31b-b83b-48e3-84c5-27b67f551477\") " pod="kube-system/cilium-operator-574c4bb98d-rhg62" Feb 12 19:14:32.839665 kubelet[1968]: E0212 19:14:32.839633 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:32.840423 env[1140]: time="2024-02-12T19:14:32.840371618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrj6b,Uid:2300bce8-e204-433d-b91c-803c925cd10d,Namespace:kube-system,Attempt:0,}" Feb 12 19:14:32.856484 kubelet[1968]: E0212 19:14:32.856433 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:32.857260 env[1140]: time="2024-02-12T19:14:32.856911750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvnnz,Uid:69f7a0b3-52ae-4e36-acee-64daad43336b,Namespace:kube-system,Attempt:0,}" Feb 12 19:14:32.880172 env[1140]: time="2024-02-12T19:14:32.880106444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:14:32.880172 env[1140]: time="2024-02-12T19:14:32.880146267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:14:32.880172 env[1140]: time="2024-02-12T19:14:32.880157113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:14:32.881849 env[1140]: time="2024-02-12T19:14:32.880977532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1e67d214865197b83e4ba07aa3fffdb1c099d0ae69c348dac5dad25e5e919cc pid=2066 runtime=io.containerd.runc.v2 Feb 12 19:14:32.889218 env[1140]: time="2024-02-12T19:14:32.889122328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:14:32.889218 env[1140]: time="2024-02-12T19:14:32.889168554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:14:32.889218 env[1140]: time="2024-02-12T19:14:32.889179520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:14:32.889548 env[1140]: time="2024-02-12T19:14:32.889505022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65 pid=2085 runtime=io.containerd.runc.v2 Feb 12 19:14:32.894643 systemd[1]: Started cri-containerd-b1e67d214865197b83e4ba07aa3fffdb1c099d0ae69c348dac5dad25e5e919cc.scope. Feb 12 19:14:32.903157 systemd[1]: Started cri-containerd-1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65.scope. Feb 12 19:14:32.941003 env[1140]: time="2024-02-12T19:14:32.940955442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrj6b,Uid:2300bce8-e204-433d-b91c-803c925cd10d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1e67d214865197b83e4ba07aa3fffdb1c099d0ae69c348dac5dad25e5e919cc\"" Feb 12 19:14:32.942380 kubelet[1968]: E0212 19:14:32.942350 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:32.944633 env[1140]: time="2024-02-12T19:14:32.944597039Z" level=info msg="CreateContainer within sandbox \"b1e67d214865197b83e4ba07aa3fffdb1c099d0ae69c348dac5dad25e5e919cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:14:32.949622 env[1140]: time="2024-02-12T19:14:32.949581027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvnnz,Uid:69f7a0b3-52ae-4e36-acee-64daad43336b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\"" Feb 12 19:14:32.950434 kubelet[1968]: E0212 19:14:32.950386 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:32.951910 env[1140]: time="2024-02-12T19:14:32.951868827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:14:32.970938 update_engine[1131]: I0212 19:14:32.970892 1131 update_attempter.cc:509] Updating boot flags... Feb 12 19:14:33.034906 kubelet[1968]: E0212 19:14:33.034863 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:33.037473 env[1140]: time="2024-02-12T19:14:33.037413755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-rhg62,Uid:ebb3c31b-b83b-48e3-84c5-27b67f551477,Namespace:kube-system,Attempt:0,}" Feb 12 19:14:33.067530 env[1140]: time="2024-02-12T19:14:33.067478706Z" level=info msg="CreateContainer within sandbox \"b1e67d214865197b83e4ba07aa3fffdb1c099d0ae69c348dac5dad25e5e919cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"13b0e71d1e41ddea4022536d7723d67a0f327de1749040b0366dff3dd3cad2cc\"" Feb 12 19:14:33.069908 env[1140]: time="2024-02-12T19:14:33.068498648Z" level=info msg="StartContainer for \"13b0e71d1e41ddea4022536d7723d67a0f327de1749040b0366dff3dd3cad2cc\"" Feb 12 19:14:33.095474 systemd[1]: Started cri-containerd-13b0e71d1e41ddea4022536d7723d67a0f327de1749040b0366dff3dd3cad2cc.scope. Feb 12 19:14:33.128723 env[1140]: time="2024-02-12T19:14:33.128054325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:14:33.128864 env[1140]: time="2024-02-12T19:14:33.128750215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:14:33.128864 env[1140]: time="2024-02-12T19:14:33.128780752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:14:33.129240 env[1140]: time="2024-02-12T19:14:33.129203577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c pid=2181 runtime=io.containerd.runc.v2 Feb 12 19:14:33.144236 systemd[1]: Started cri-containerd-c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c.scope. Feb 12 19:14:33.156239 env[1140]: time="2024-02-12T19:14:33.156094560Z" level=info msg="StartContainer for \"13b0e71d1e41ddea4022536d7723d67a0f327de1749040b0366dff3dd3cad2cc\" returns successfully" Feb 12 19:14:33.199526 env[1140]: time="2024-02-12T19:14:33.199467869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-rhg62,Uid:ebb3c31b-b83b-48e3-84c5-27b67f551477,Namespace:kube-system,Attempt:0,} returns sandbox id \"c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c\"" Feb 12 19:14:33.200033 kubelet[1968]: E0212 19:14:33.200003 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:33.642916 kubelet[1968]: E0212 19:14:33.641664 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:33.654464 kubelet[1968]: I0212 19:14:33.654154 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wrj6b" podStartSLOduration=1.654121254 podCreationTimestamp="2024-02-12 19:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:14:33.654104365 +0000 UTC m=+14.196912286" watchObservedRunningTime="2024-02-12 19:14:33.654121254 +0000 UTC m=+14.196929135" Feb 12 19:14:36.363403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124433819.mount: Deactivated successfully. 
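The pod_startup_latency_tracker entries above carry their own arithmetic: podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp (for kube-proxy-wrj6b, 19:14:33.654121254 minus 19:14:32 gives 1.654121254s, matching the entry above). A small check of that subtraction, using the timestamps from that entry; the layout string is simply Go's reference form for these timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching timestamps like "2024-02-12 19:14:33.654121254 +0000 UTC".
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2024-02-12 19:14:32 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-02-12 19:14:33.654121254 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.654121254, the podStartSLOduration logged for kube-proxy-wrj6b.
	fmt.Println(running.Sub(created).Seconds())
}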
Feb 12 19:14:38.663165 env[1140]: time="2024-02-12T19:14:38.663075503Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:14:38.668538 env[1140]: time="2024-02-12T19:14:38.668166632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:14:38.670089 env[1140]: time="2024-02-12T19:14:38.670052220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:14:38.670681 env[1140]: time="2024-02-12T19:14:38.670648830Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:14:38.672153 env[1140]: time="2024-02-12T19:14:38.672121686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:14:38.674216 env[1140]: time="2024-02-12T19:14:38.673607067Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:14:38.691612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468913397.mount: Deactivated successfully. Feb 12 19:14:38.696403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015093415.mount: Deactivated successfully. Feb 12 19:14:38.703291 env[1140]: time="2024-02-12T19:14:38.703234095Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\"" Feb 12 19:14:38.705144 env[1140]: time="2024-02-12T19:14:38.704267528Z" level=info msg="StartContainer for \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\"" Feb 12 19:14:38.728197 systemd[1]: Started cri-containerd-36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538.scope. Feb 12 19:14:38.796374 env[1140]: time="2024-02-12T19:14:38.796318939Z" level=info msg="StartContainer for \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\" returns successfully" Feb 12 19:14:38.856312 systemd[1]: cri-containerd-36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538.scope: Deactivated successfully. 
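The PullImage lines above use the pinned reference form repository:tag@sha256:digest, which containerd resolves to the image ID reported by "returns image reference". A rough sketch of splitting such a reference into its parts; plain string handling, not containerd's own reference parser:

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks "repo:tag@sha256:..." into repository, tag, and digest.
// When a tag and a digest are both present, the digest governs the pull.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i:], "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce2b0a...
}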
Feb 12 19:14:38.972666 env[1140]: time="2024-02-12T19:14:38.972531062Z" level=info msg="shim disconnected" id=36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538 Feb 12 19:14:38.972666 env[1140]: time="2024-02-12T19:14:38.972583564Z" level=warning msg="cleaning up after shim disconnected" id=36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538 namespace=k8s.io Feb 12 19:14:38.972666 env[1140]: time="2024-02-12T19:14:38.972593448Z" level=info msg="cleaning up dead shim" Feb 12 19:14:38.981070 env[1140]: time="2024-02-12T19:14:38.981024054Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:14:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2396 runtime=io.containerd.runc.v2\n" Feb 12 19:14:39.662457 kubelet[1968]: E0212 19:14:39.662412 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:39.674076 env[1140]: time="2024-02-12T19:14:39.673460210Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:14:39.688517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538-rootfs.mount: Deactivated successfully. Feb 12 19:14:39.708163 env[1140]: time="2024-02-12T19:14:39.708104809Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\"" Feb 12 19:14:39.711205 env[1140]: time="2024-02-12T19:14:39.711012331Z" level=info msg="StartContainer for \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\"" Feb 12 19:14:39.730367 systemd[1]: Started cri-containerd-15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d.scope. Feb 12 19:14:39.803704 env[1140]: time="2024-02-12T19:14:39.803640134Z" level=info msg="StartContainer for \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\" returns successfully" Feb 12 19:14:39.832474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:14:39.832684 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:14:39.832953 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:14:39.834792 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:14:39.836688 systemd[1]: cri-containerd-15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d.scope: Deactivated successfully. Feb 12 19:14:39.852908 systemd[1]: Finished systemd-sysctl.service. 
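When a short-lived init container such as mount-cgroup exits, containerd logs "shim disconnected" with the container ID and then cleans up the dead shim, as in the entries above. One hedged way to pull those IDs out of a captured log like this one when triaging; the regexp assumes containerd's 64-hex-digit container IDs:

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Matches containerd entries like: msg="shim disconnected" id=<64 hex digits>
var shimRE = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

func main() {
	// Abbreviated sample taken from the entries above.
	log := `Feb 12 19:14:38.972666 env[1140]: level=info msg="shim disconnected" id=36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538`
	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		if m := shimRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("exited container:", m[1])
		}
	}
}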
Feb 12 19:14:39.880447 env[1140]: time="2024-02-12T19:14:39.880398118Z" level=info msg="shim disconnected" id=15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d Feb 12 19:14:39.880447 env[1140]: time="2024-02-12T19:14:39.880446097Z" level=warning msg="cleaning up after shim disconnected" id=15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d namespace=k8s.io Feb 12 19:14:39.880734 env[1140]: time="2024-02-12T19:14:39.880458422Z" level=info msg="cleaning up dead shim" Feb 12 19:14:39.888519 env[1140]: time="2024-02-12T19:14:39.888471183Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:14:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2463 runtime=io.containerd.runc.v2\n" Feb 12 19:14:40.664365 kubelet[1968]: E0212 19:14:40.664324 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:40.672901 env[1140]: time="2024-02-12T19:14:40.671978856Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:14:40.688053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d-rootfs.mount: Deactivated successfully. Feb 12 19:14:40.708304 env[1140]: time="2024-02-12T19:14:40.708246230Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\"" Feb 12 19:14:40.711468 env[1140]: time="2024-02-12T19:14:40.709318919Z" level=info msg="StartContainer for \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\"" Feb 12 19:14:40.730701 systemd[1]: Started cri-containerd-5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba.scope. Feb 12 19:14:40.807957 systemd[1]: cri-containerd-5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba.scope: Deactivated successfully. 
Feb 12 19:14:40.812105 env[1140]: time="2024-02-12T19:14:40.812051241Z" level=info msg="StartContainer for \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\" returns successfully" Feb 12 19:14:40.853372 env[1140]: time="2024-02-12T19:14:40.853322526Z" level=info msg="shim disconnected" id=5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba Feb 12 19:14:40.853372 env[1140]: time="2024-02-12T19:14:40.853369464Z" level=warning msg="cleaning up after shim disconnected" id=5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba namespace=k8s.io Feb 12 19:14:40.853372 env[1140]: time="2024-02-12T19:14:40.853379628Z" level=info msg="cleaning up dead shim" Feb 12 19:14:40.860647 env[1140]: time="2024-02-12T19:14:40.860604588Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:14:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2519 runtime=io.containerd.runc.v2\n" Feb 12 19:14:40.931802 env[1140]: time="2024-02-12T19:14:40.931685779Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:14:40.933545 env[1140]: time="2024-02-12T19:14:40.933504394Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:14:40.935402 env[1140]: time="2024-02-12T19:14:40.935364265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:14:40.936025 env[1140]: time="2024-02-12T19:14:40.935991464Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:14:40.939405 env[1140]: time="2024-02-12T19:14:40.939371755Z" level=info msg="CreateContainer within sandbox \"c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:14:40.951069 env[1140]: time="2024-02-12T19:14:40.951026487Z" level=info msg="CreateContainer within sandbox \"c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\"" Feb 12 19:14:40.951919 env[1140]: time="2024-02-12T19:14:40.951857965Z" level=info msg="StartContainer for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\"" Feb 12 19:14:40.972077 systemd[1]: Started cri-containerd-5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3.scope. 
Feb 12 19:14:41.026181 env[1140]: time="2024-02-12T19:14:41.026106192Z" level=info msg="StartContainer for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" returns successfully" Feb 12 19:14:41.667481 kubelet[1968]: E0212 19:14:41.667452 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:41.669632 kubelet[1968]: E0212 19:14:41.669603 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:41.671838 env[1140]: time="2024-02-12T19:14:41.671784035Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:14:41.688588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba-rootfs.mount: Deactivated successfully. Feb 12 19:14:41.690167 env[1140]: time="2024-02-12T19:14:41.689200962Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\"" Feb 12 19:14:41.690167 env[1140]: time="2024-02-12T19:14:41.690096010Z" level=info msg="StartContainer for \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\"" Feb 12 19:14:41.710360 kubelet[1968]: I0212 19:14:41.710320 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-rhg62" podStartSLOduration=1.975575728 podCreationTimestamp="2024-02-12 19:14:32 +0000 UTC" firstStartedPulling="2024-02-12 19:14:33.201866785 +0000 UTC m=+13.744674706" lastFinishedPulling="2024-02-12 19:14:40.936568885 +0000 UTC m=+21.479376806" observedRunningTime="2024-02-12 19:14:41.68424215 +0000 UTC m=+22.227050071" watchObservedRunningTime="2024-02-12 19:14:41.710277828 +0000 UTC m=+22.253085749" Feb 12 19:14:41.716727 systemd[1]: run-containerd-runc-k8s.io-48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86-runc.YJKTRD.mount: Deactivated successfully. Feb 12 19:14:41.719837 systemd[1]: Started cri-containerd-48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86.scope. Feb 12 19:14:41.795475 env[1140]: time="2024-02-12T19:14:41.795416712Z" level=info msg="StartContainer for \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\" returns successfully" Feb 12 19:14:41.797688 systemd[1]: cri-containerd-48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86.scope: Deactivated successfully. Feb 12 19:14:41.815013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86-rootfs.mount: Deactivated successfully. 
Feb 12 19:14:41.819719 env[1140]: time="2024-02-12T19:14:41.819661535Z" level=info msg="shim disconnected" id=48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86 Feb 12 19:14:41.819719 env[1140]: time="2024-02-12T19:14:41.819717876Z" level=warning msg="cleaning up after shim disconnected" id=48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86 namespace=k8s.io Feb 12 19:14:41.819924 env[1140]: time="2024-02-12T19:14:41.819728000Z" level=info msg="cleaning up dead shim" Feb 12 19:14:41.826590 env[1140]: time="2024-02-12T19:14:41.826544652Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:14:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2610 runtime=io.containerd.runc.v2\n" Feb 12 19:14:42.673104 kubelet[1968]: E0212 19:14:42.673070 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:42.673446 kubelet[1968]: E0212 19:14:42.673357 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:42.675105 env[1140]: time="2024-02-12T19:14:42.675067438Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:14:42.691687 env[1140]: time="2024-02-12T19:14:42.691636200Z" level=info msg="CreateContainer within sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\"" Feb 12 19:14:42.692152 env[1140]: time="2024-02-12T19:14:42.692126732Z" level=info msg="StartContainer for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\"" Feb 12 19:14:42.711715 systemd[1]: run-containerd-runc-k8s.io-7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59-runc.WCn6Lw.mount: Deactivated successfully. Feb 12 19:14:42.715163 systemd[1]: Started cri-containerd-7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59.scope. Feb 12 19:14:42.773155 env[1140]: time="2024-02-12T19:14:42.773108411Z" level=info msg="StartContainer for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" returns successfully" Feb 12 19:14:42.929142 kubelet[1968]: I0212 19:14:42.929030 1968 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:14:42.955887 kubelet[1968]: I0212 19:14:42.955801 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:14:42.956393 kubelet[1968]: I0212 19:14:42.956353 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:14:42.961199 systemd[1]: Created slice kubepods-burstable-pod768cc57b_e8be_46b3_ac3c_2fa91fc20e59.slice. Feb 12 19:14:42.965977 systemd[1]: Created slice kubepods-burstable-pod982ec491_378a_4df0_a502_9681a5fbe246.slice. 
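The kubepods slice names systemd creates above follow a fixed mapping: kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID replaced by underscores. A sketch that reproduces the names logged here:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the mapping visible above: the pod UID's dashes
// become underscores inside kubepods-<qos>-pod<uid>.slice.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches "Created slice kubepods-burstable-pod982ec491_..." above.
	fmt.Println(sliceName("burstable", "982ec491-378a-4df0-a502-9681a5fbe246"))
	fmt.Println(sliceName("besteffort", "2300bce8-e204-433d-b91c-803c925cd10d"))
}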
Feb 12 19:14:43.020536 kubelet[1968]: I0212 19:14:43.020494 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/768cc57b-e8be-46b3-ac3c-2fa91fc20e59-config-volume\") pod \"coredns-5d78c9869d-xtjw6\" (UID: \"768cc57b-e8be-46b3-ac3c-2fa91fc20e59\") " pod="kube-system/coredns-5d78c9869d-xtjw6" Feb 12 19:14:43.020692 kubelet[1968]: I0212 19:14:43.020597 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwl4r\" (UniqueName: \"kubernetes.io/projected/768cc57b-e8be-46b3-ac3c-2fa91fc20e59-kube-api-access-wwl4r\") pod \"coredns-5d78c9869d-xtjw6\" (UID: \"768cc57b-e8be-46b3-ac3c-2fa91fc20e59\") " pod="kube-system/coredns-5d78c9869d-xtjw6" Feb 12 19:14:43.020692 kubelet[1968]: I0212 19:14:43.020662 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n49c\" (UniqueName: \"kubernetes.io/projected/982ec491-378a-4df0-a502-9681a5fbe246-kube-api-access-9n49c\") pod \"coredns-5d78c9869d-5xwcl\" (UID: \"982ec491-378a-4df0-a502-9681a5fbe246\") " pod="kube-system/coredns-5d78c9869d-5xwcl" Feb 12 19:14:43.020692 kubelet[1968]: I0212 19:14:43.020684 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/982ec491-378a-4df0-a502-9681a5fbe246-config-volume\") pod \"coredns-5d78c9869d-5xwcl\" (UID: \"982ec491-378a-4df0-a502-9681a5fbe246\") " pod="kube-system/coredns-5d78c9869d-5xwcl" Feb 12 19:14:43.061914 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:14:43.265156 kubelet[1968]: E0212 19:14:43.265045 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:43.265792 env[1140]: time="2024-02-12T19:14:43.265740586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-xtjw6,Uid:768cc57b-e8be-46b3-ac3c-2fa91fc20e59,Namespace:kube-system,Attempt:0,}" Feb 12 19:14:43.268442 kubelet[1968]: E0212 19:14:43.268410 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:43.269021 env[1140]: time="2024-02-12T19:14:43.268973552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-5xwcl,Uid:982ec491-378a-4df0-a502-9681a5fbe246,Namespace:kube-system,Attempt:0,}" Feb 12 19:14:43.342909 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
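The kernel warning above fires because kernel.unprivileged_bpf_disabled is 0. Cilium loads its BPF programs from a privileged agent, so unprivileged bpf() can usually be switched off independently of it; whether that is appropriate depends on the workload. A sketch of flipping that sysctl directly, equivalent to sysctl -w kernel.unprivileged_bpf_disabled=1 (requires root on Linux):

package main

import (
	"fmt"
	"os"
)

const knob = "/proc/sys/kernel/unprivileged_bpf_disabled"

func main() {
	// Writing "1" disables unprivileged bpf() until reboot (the kernel treats
	// 1 as one-way); "2" also disables it but remains changeable at runtime.
	if err := os.WriteFile(knob, []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
		os.Exit(1)
	}
	cur, _ := os.ReadFile(knob)
	fmt.Printf("kernel.unprivileged_bpf_disabled = %s", cur)
}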
Feb 12 19:14:43.678299 kubelet[1968]: E0212 19:14:43.678265 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:43.694241 kubelet[1968]: I0212 19:14:43.694191 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jvnnz" podStartSLOduration=5.973411714 podCreationTimestamp="2024-02-12 19:14:32 +0000 UTC" firstStartedPulling="2024-02-12 19:14:32.951186165 +0000 UTC m=+13.493994086" lastFinishedPulling="2024-02-12 19:14:38.671924203 +0000 UTC m=+19.214732124" observedRunningTime="2024-02-12 19:14:43.693644382 +0000 UTC m=+24.236452303" watchObservedRunningTime="2024-02-12 19:14:43.694149752 +0000 UTC m=+24.236957633" Feb 12 19:14:44.680107 kubelet[1968]: E0212 19:14:44.680068 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:45.001061 systemd-networkd[1041]: cilium_host: Link UP Feb 12 19:14:45.001977 systemd-networkd[1041]: cilium_net: Link UP Feb 12 19:14:45.002771 systemd-networkd[1041]: cilium_net: Gained carrier Feb 12 19:14:45.003386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:14:45.003434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:14:45.003566 systemd-networkd[1041]: cilium_host: Gained carrier Feb 12 19:14:45.129294 systemd-networkd[1041]: cilium_vxlan: Link UP Feb 12 19:14:45.129302 systemd-networkd[1041]: cilium_vxlan: Gained carrier Feb 12 19:14:45.435913 kernel: NET: Registered PF_ALG protocol family Feb 12 19:14:45.505237 systemd-networkd[1041]: cilium_host: Gained IPv6LL Feb 12 19:14:45.585244 systemd-networkd[1041]: cilium_net: Gained IPv6LL Feb 12 19:14:45.682480 kubelet[1968]: E0212 19:14:45.682447 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:46.113011 systemd-networkd[1041]: lxc_health: Link UP Feb 12 19:14:46.123910 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:14:46.124558 systemd-networkd[1041]: lxc_health: Gained carrier Feb 12 19:14:46.441495 systemd-networkd[1041]: lxc7c01893ef828: Link UP Feb 12 19:14:46.455922 kernel: eth0: renamed from tmpb0e9e Feb 12 19:14:46.464465 kernel: eth0: renamed from tmpefcf3 Feb 12 19:14:46.464322 systemd-networkd[1041]: lxc12a25c026b5f: Link UP Feb 12 19:14:46.474901 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc12a25c026b5f: link becomes ready Feb 12 19:14:46.475118 systemd-networkd[1041]: lxc12a25c026b5f: Gained carrier Feb 12 19:14:46.475908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7c01893ef828: link becomes ready Feb 12 19:14:46.475908 systemd-networkd[1041]: lxc7c01893ef828: Gained carrier Feb 12 19:14:46.865343 kubelet[1968]: E0212 19:14:46.865314 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:47.057352 systemd-networkd[1041]: cilium_vxlan: Gained IPv6LL Feb 12 19:14:47.569110 systemd-networkd[1041]: lxc12a25c026b5f: Gained IPv6LL Feb 12 19:14:47.633054 systemd-networkd[1041]: lxc7c01893ef828: Gained IPv6LL Feb 12 19:14:47.685659 kubelet[1968]: E0212 19:14:47.685628 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:47.889013 systemd-networkd[1041]: lxc_health: Gained IPv6LL Feb 12 19:14:48.688011 kubelet[1968]: E0212 19:14:48.687982 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:48.861079 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:41626.service. Feb 12 19:14:48.906813 sshd[3160]: Accepted publickey for core from 10.0.0.1 port 41626 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:14:48.910720 sshd[3160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:14:48.915284 systemd-logind[1129]: New session 6 of user core. Feb 12 19:14:48.915334 systemd[1]: Started session-6.scope. Feb 12 19:14:49.123382 sshd[3160]: pam_unix(sshd:session): session closed for user core Feb 12 19:14:49.126300 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:41626.service: Deactivated successfully. Feb 12 19:14:49.127103 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:14:49.127681 systemd-logind[1129]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:14:49.128426 systemd-logind[1129]: Removed session 6. Feb 12 19:14:50.241229 env[1140]: time="2024-02-12T19:14:50.241149798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:14:50.241741 env[1140]: time="2024-02-12T19:14:50.241695378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:14:50.241866 env[1140]: time="2024-02-12T19:14:50.241843736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:14:50.242271 env[1140]: time="2024-02-12T19:14:50.242225314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0e9ed987ea81418c942ec6ede8d56071c4ae245e1e05cb08e128ef799cbcde5 pid=3190 runtime=io.containerd.runc.v2 Feb 12 19:14:50.260448 systemd[1]: Started cri-containerd-b0e9ed987ea81418c942ec6ede8d56071c4ae245e1e05cb08e128ef799cbcde5.scope. Feb 12 19:14:50.263900 env[1140]: time="2024-02-12T19:14:50.263812385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:14:50.263900 env[1140]: time="2024-02-12T19:14:50.263856756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:14:50.263900 env[1140]: time="2024-02-12T19:14:50.263867239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:14:50.264099 env[1140]: time="2024-02-12T19:14:50.264054407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efcf37b38629dad12502a2a2acb3af25cf7051ab3f4973b97e0c19b1dd54df55 pid=3213 runtime=io.containerd.runc.v2 Feb 12 19:14:50.284654 systemd[1]: run-containerd-runc-k8s.io-efcf37b38629dad12502a2a2acb3af25cf7051ab3f4973b97e0c19b1dd54df55-runc.yHQXj3.mount: Deactivated successfully. 
Feb 12 19:14:50.291826 systemd[1]: Started cri-containerd-efcf37b38629dad12502a2a2acb3af25cf7051ab3f4973b97e0c19b1dd54df55.scope. Feb 12 19:14:50.298947 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:14:50.311665 systemd-resolved[1091]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:14:50.319658 env[1140]: time="2024-02-12T19:14:50.319618254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-xtjw6,Uid:768cc57b-e8be-46b3-ac3c-2fa91fc20e59,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0e9ed987ea81418c942ec6ede8d56071c4ae245e1e05cb08e128ef799cbcde5\"" Feb 12 19:14:50.320640 kubelet[1968]: E0212 19:14:50.320620 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:50.331737 env[1140]: time="2024-02-12T19:14:50.330868626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-5xwcl,Uid:982ec491-378a-4df0-a502-9681a5fbe246,Namespace:kube-system,Attempt:0,} returns sandbox id \"efcf37b38629dad12502a2a2acb3af25cf7051ab3f4973b97e0c19b1dd54df55\"" Feb 12 19:14:50.331737 env[1140]: time="2024-02-12T19:14:50.330928202Z" level=info msg="CreateContainer within sandbox \"b0e9ed987ea81418c942ec6ede8d56071c4ae245e1e05cb08e128ef799cbcde5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:14:50.331926 kubelet[1968]: E0212 19:14:50.331574 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:50.335748 env[1140]: time="2024-02-12T19:14:50.335701509Z" level=info msg="CreateContainer within sandbox \"efcf37b38629dad12502a2a2acb3af25cf7051ab3f4973b97e0c19b1dd54df55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:14:50.404114 env[1140]: time="2024-02-12T19:14:50.404057765Z" level=info msg="CreateContainer within sandbox \"b0e9ed987ea81418c942ec6ede8d56071c4ae245e1e05cb08e128ef799cbcde5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ba6d12f0b74e38130814a79987efee0d5aa08b2f72a775f93aae4a61fe17a07\"" Feb 12 19:14:50.404678 env[1140]: time="2024-02-12T19:14:50.404632393Z" level=info msg="StartContainer for \"7ba6d12f0b74e38130814a79987efee0d5aa08b2f72a775f93aae4a61fe17a07\"" Feb 12 19:14:50.412800 env[1140]: time="2024-02-12T19:14:50.412742878Z" level=info msg="CreateContainer within sandbox \"efcf37b38629dad12502a2a2acb3af25cf7051ab3f4973b97e0c19b1dd54df55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9617c16385d328af9a1b719f0353e637129dcd35bdc4ebba2cbf91b7a57b832a\"" Feb 12 19:14:50.413251 env[1140]: time="2024-02-12T19:14:50.413224962Z" level=info msg="StartContainer for \"9617c16385d328af9a1b719f0353e637129dcd35bdc4ebba2cbf91b7a57b832a\"" Feb 12 19:14:50.427583 systemd[1]: Started cri-containerd-7ba6d12f0b74e38130814a79987efee0d5aa08b2f72a775f93aae4a61fe17a07.scope. Feb 12 19:14:50.433912 systemd[1]: Started cri-containerd-9617c16385d328af9a1b719f0353e637129dcd35bdc4ebba2cbf91b7a57b832a.scope. 
Feb 12 19:14:50.464713 env[1140]: time="2024-02-12T19:14:50.464643263Z" level=info msg="StartContainer for \"7ba6d12f0b74e38130814a79987efee0d5aa08b2f72a775f93aae4a61fe17a07\" returns successfully" Feb 12 19:14:50.481227 env[1140]: time="2024-02-12T19:14:50.481171553Z" level=info msg="StartContainer for \"9617c16385d328af9a1b719f0353e637129dcd35bdc4ebba2cbf91b7a57b832a\" returns successfully" Feb 12 19:14:50.692090 kubelet[1968]: E0212 19:14:50.692064 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:50.694476 kubelet[1968]: E0212 19:14:50.694449 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:50.703465 kubelet[1968]: I0212 19:14:50.703433 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-5xwcl" podStartSLOduration=18.703397372 podCreationTimestamp="2024-02-12 19:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:14:50.702619652 +0000 UTC m=+31.245427573" watchObservedRunningTime="2024-02-12 19:14:50.703397372 +0000 UTC m=+31.246205293" Feb 12 19:14:50.725471 kubelet[1968]: I0212 19:14:50.725433 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-xtjw6" podStartSLOduration=18.725387066 podCreationTimestamp="2024-02-12 19:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:14:50.713090745 +0000 UTC m=+31.255898666" watchObservedRunningTime="2024-02-12 19:14:50.725387066 +0000 UTC m=+31.268194987" Feb 12 19:14:51.696305 kubelet[1968]: E0212 19:14:51.696278 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:51.696305 kubelet[1968]: E0212 19:14:51.696310 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:52.698338 kubelet[1968]: E0212 19:14:52.697862 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:52.698338 kubelet[1968]: E0212 19:14:52.697965 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:14:54.129412 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:54212.service. Feb 12 19:14:54.172283 sshd[3354]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:14:54.174217 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:14:54.178503 systemd[1]: Started session-7.scope. Feb 12 19:14:54.178654 systemd-logind[1129]: New session 7 of user core. Feb 12 19:14:54.308512 sshd[3354]: pam_unix(sshd:session): session closed for user core Feb 12 19:14:54.311079 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:54212.service: Deactivated successfully. 
Feb 12 19:14:54.311850 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:14:54.312418 systemd-logind[1129]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:14:54.313092 systemd-logind[1129]: Removed session 7. Feb 12 19:14:59.315530 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:54228.service. Feb 12 19:14:59.351942 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 54228 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:14:59.353216 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:14:59.356940 systemd-logind[1129]: New session 8 of user core. Feb 12 19:14:59.357449 systemd[1]: Started session-8.scope. Feb 12 19:14:59.496912 sshd[3368]: pam_unix(sshd:session): session closed for user core Feb 12 19:14:59.499421 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:54228.service: Deactivated successfully. Feb 12 19:14:59.500213 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:14:59.500734 systemd-logind[1129]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:14:59.501375 systemd-logind[1129]: Removed session 8. Feb 12 19:15:04.502224 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:38510.service. Feb 12 19:15:04.537713 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 38510 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:04.539061 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:04.542520 systemd-logind[1129]: New session 9 of user core. Feb 12 19:15:04.543438 systemd[1]: Started session-9.scope. Feb 12 19:15:04.665250 sshd[3387]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:04.668535 systemd[1]: Started sshd@9-10.0.0.42:22-10.0.0.1:38520.service. Feb 12 19:15:04.670154 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:38510.service: Deactivated successfully. Feb 12 19:15:04.670930 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:15:04.671486 systemd-logind[1129]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:15:04.672166 systemd-logind[1129]: Removed session 9. Feb 12 19:15:04.704747 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 38520 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:04.706310 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:04.709517 systemd-logind[1129]: New session 10 of user core. Feb 12 19:15:04.710459 systemd[1]: Started session-10.scope. Feb 12 19:15:05.478362 sshd[3400]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:05.478921 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:38528.service. Feb 12 19:15:05.483323 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:38520.service: Deactivated successfully. Feb 12 19:15:05.484362 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:15:05.485445 systemd-logind[1129]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:15:05.494694 systemd-logind[1129]: Removed session 10. Feb 12 19:15:05.522389 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 38528 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:05.523821 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:05.528416 systemd[1]: Started session-11.scope. Feb 12 19:15:05.528802 systemd-logind[1129]: New session 11 of user core. 
Feb 12 19:15:05.668846 sshd[3411]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:05.671994 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:15:05.672584 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:38528.service: Deactivated successfully. Feb 12 19:15:05.673568 systemd-logind[1129]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:15:05.674312 systemd-logind[1129]: Removed session 11. Feb 12 19:15:10.677339 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:38540.service. Feb 12 19:15:10.718044 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 38540 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:10.719451 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:10.723803 systemd-logind[1129]: New session 12 of user core. Feb 12 19:15:10.725001 systemd[1]: Started session-12.scope. Feb 12 19:15:10.858215 sshd[3426]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:10.863374 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:15:10.864023 systemd-logind[1129]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:15:10.864165 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:38540.service: Deactivated successfully. Feb 12 19:15:10.865218 systemd-logind[1129]: Removed session 12. Feb 12 19:15:15.862013 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:34694.service. Feb 12 19:15:15.896108 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 34694 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:15.897765 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:15.901090 systemd-logind[1129]: New session 13 of user core. Feb 12 19:15:15.901974 systemd[1]: Started session-13.scope. Feb 12 19:15:16.013524 sshd[3439]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:16.017309 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:34698.service. Feb 12 19:15:16.017965 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:34694.service: Deactivated successfully. Feb 12 19:15:16.018687 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:15:16.019260 systemd-logind[1129]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:15:16.020059 systemd-logind[1129]: Removed session 13. Feb 12 19:15:16.051264 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 34698 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:16.052544 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:16.055952 systemd-logind[1129]: New session 14 of user core. Feb 12 19:15:16.056412 systemd[1]: Started session-14.scope. Feb 12 19:15:16.273251 sshd[3451]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:16.275592 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:34700.service. Feb 12 19:15:16.277350 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:34698.service: Deactivated successfully. Feb 12 19:15:16.278016 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:15:16.278832 systemd-logind[1129]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:15:16.279440 systemd-logind[1129]: Removed session 14. 
Feb 12 19:15:16.319670 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 34700 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:16.320819 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:16.324241 systemd-logind[1129]: New session 15 of user core. Feb 12 19:15:16.325113 systemd[1]: Started session-15.scope. Feb 12 19:15:17.208644 sshd[3462]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:17.212347 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:34702.service. Feb 12 19:15:17.212871 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:34700.service: Deactivated successfully. Feb 12 19:15:17.214028 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:15:17.215938 systemd-logind[1129]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:15:17.217515 systemd-logind[1129]: Removed session 15. Feb 12 19:15:17.251465 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 34702 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:17.254370 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:17.257687 systemd-logind[1129]: New session 16 of user core. Feb 12 19:15:17.258531 systemd[1]: Started session-16.scope. Feb 12 19:15:17.582060 sshd[3486]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:17.586992 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:34712.service. Feb 12 19:15:17.588924 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:15:17.589489 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:34702.service: Deactivated successfully. Feb 12 19:15:17.592407 systemd-logind[1129]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:15:17.593491 systemd-logind[1129]: Removed session 16. Feb 12 19:15:17.625827 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 34712 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:17.627336 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:17.631466 systemd[1]: Started session-17.scope. Feb 12 19:15:17.631634 systemd-logind[1129]: New session 17 of user core. Feb 12 19:15:17.758738 sshd[3497]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:17.761174 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:34712.service: Deactivated successfully. Feb 12 19:15:17.761923 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:15:17.762588 systemd-logind[1129]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:15:17.763353 systemd-logind[1129]: Removed session 17. Feb 12 19:15:22.763576 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:42972.service. Feb 12 19:15:22.799114 sshd[3516]: Accepted publickey for core from 10.0.0.1 port 42972 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:22.800574 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:22.804657 systemd-logind[1129]: New session 18 of user core. Feb 12 19:15:22.805174 systemd[1]: Started session-18.scope. Feb 12 19:15:22.929508 sshd[3516]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:22.933245 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:42972.service: Deactivated successfully. Feb 12 19:15:22.933972 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:15:22.934772 systemd-logind[1129]: Session 18 logged out. Waiting for processes to exit. 
Feb 12 19:15:22.935663 systemd-logind[1129]: Removed session 18. Feb 12 19:15:27.935521 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:42982.service. Feb 12 19:15:27.976494 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 42982 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:27.978234 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:27.982243 systemd-logind[1129]: New session 19 of user core. Feb 12 19:15:27.986789 systemd[1]: Started session-19.scope. Feb 12 19:15:28.112074 sshd[3529]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:28.115259 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:42982.service: Deactivated successfully. Feb 12 19:15:28.116129 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:15:28.116681 systemd-logind[1129]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:15:28.117531 systemd-logind[1129]: Removed session 19. Feb 12 19:15:33.116408 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:39480.service. Feb 12 19:15:33.152925 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 39480 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:33.154318 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:33.160961 systemd-logind[1129]: New session 20 of user core. Feb 12 19:15:33.162047 systemd[1]: Started session-20.scope. Feb 12 19:15:33.298997 sshd[3542]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:33.301473 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:39480.service: Deactivated successfully. Feb 12 19:15:33.302271 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:15:33.303606 systemd-logind[1129]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:15:33.304399 systemd-logind[1129]: Removed session 20. Feb 12 19:15:38.302770 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:39496.service. Feb 12 19:15:38.337409 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 39496 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:38.339034 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:38.342503 systemd-logind[1129]: New session 21 of user core. Feb 12 19:15:38.343501 systemd[1]: Started session-21.scope. Feb 12 19:15:38.450971 sshd[3557]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:38.454655 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:39504.service. Feb 12 19:15:38.455190 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:39496.service: Deactivated successfully. Feb 12 19:15:38.455948 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:15:38.456588 systemd-logind[1129]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:15:38.457454 systemd-logind[1129]: Removed session 21. Feb 12 19:15:38.491135 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:38.492318 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:38.495701 systemd-logind[1129]: New session 22 of user core. Feb 12 19:15:38.496542 systemd[1]: Started session-22.scope. 
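The journal shows a regular lifecycle per login: a per-connection `sshd@…service` unit starts, logind announces `New session N of user core`, a `session-N.scope` runs the login, and teardown logs `Removed session N`. A sketch that pairs those open/close lines and prints session durations; it assumes one journal entry per input line (e.g. `journalctl -o short-precise` output) and ignores the missing year, which is fine for durations.

```go
// sessions.go - pair "New session N" / "Removed session N" journal lines
// and report how long each SSH session lasted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the timestamp prefix plus logind's session messages, e.g.
//   "Feb 12 19:14:59.356940 ... New session 8 of user core."
var re = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) .*?(New|Removed) session (\d+)`)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// The journal omits the year; year 0 is good enough for durations.
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "New":
			opened[m[3]] = ts
		case "Removed":
			if start, ok := opened[m[3]]; ok {
				fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(start))
				delete(opened, m[3])
			}
		}
	}
}
```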
Feb 12 19:15:38.573546 kubelet[1968]: E0212 19:15:38.573495 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:15:41.066683 env[1140]: time="2024-02-12T19:15:41.064517364Z" level=info msg="StopContainer for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" with timeout 30 (s)" Feb 12 19:15:41.066683 env[1140]: time="2024-02-12T19:15:41.064885788Z" level=info msg="Stop container \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" with signal terminated" Feb 12 19:15:41.077097 systemd[1]: cri-containerd-5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3.scope: Deactivated successfully. Feb 12 19:15:41.096301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3-rootfs.mount: Deactivated successfully. Feb 12 19:15:41.102745 env[1140]: time="2024-02-12T19:15:41.102678053Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:15:41.109328 env[1140]: time="2024-02-12T19:15:41.109206094Z" level=info msg="shim disconnected" id=5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3 Feb 12 19:15:41.109328 env[1140]: time="2024-02-12T19:15:41.109252972Z" level=warning msg="cleaning up after shim disconnected" id=5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3 namespace=k8s.io Feb 12 19:15:41.109328 env[1140]: time="2024-02-12T19:15:41.109262971Z" level=info msg="cleaning up dead shim" Feb 12 19:15:41.110331 env[1140]: time="2024-02-12T19:15:41.110300047Z" level=info msg="StopContainer for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" with timeout 1 (s)" Feb 12 19:15:41.110631 env[1140]: time="2024-02-12T19:15:41.110599274Z" level=info msg="Stop container \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" with signal terminated" Feb 12 19:15:41.117715 systemd-networkd[1041]: lxc_health: Link DOWN Feb 12 19:15:41.117723 systemd-networkd[1041]: lxc_health: Lost carrier Feb 12 19:15:41.118859 env[1140]: time="2024-02-12T19:15:41.118830802Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3617 runtime=io.containerd.runc.v2\n" Feb 12 19:15:41.121080 env[1140]: time="2024-02-12T19:15:41.121035908Z" level=info msg="StopContainer for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" returns successfully" Feb 12 19:15:41.121685 env[1140]: time="2024-02-12T19:15:41.121660001Z" level=info msg="StopPodSandbox for \"c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c\"" Feb 12 19:15:41.121733 env[1140]: time="2024-02-12T19:15:41.121721399Z" level=info msg="Container to stop \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:15:41.123221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c-shm.mount: Deactivated successfully. Feb 12 19:15:41.129179 systemd[1]: cri-containerd-c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c.scope: Deactivated successfully. 
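The `StopContainer for "5f8566…" with timeout 30 (s)` entry followed by `Stop container … with signal terminated` reflects the usual graceful-stop contract: deliver SIGTERM, wait out the grace period, then force-kill. A standalone Go sketch of that pattern against an ordinary child process; this is an illustration of the semantics, not containerd's actual implementation.

```go
// stop.go - SIGTERM-then-SIGKILL with a grace period, mirroring the
// "StopContainer ... with timeout 30 (s)" semantics seen in the log.
package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	// First ask politely, as "with signal terminated" does above.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		// Grace period elapsed: escalate, as a runtime would.
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Println(stopWithTimeout(cmd, 30*time.Second))
}
```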
Feb 12 19:15:41.148679 systemd[1]: cri-containerd-7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59.scope: Deactivated successfully. Feb 12 19:15:41.149051 systemd[1]: cri-containerd-7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59.scope: Consumed 7.065s CPU time. Feb 12 19:15:41.153240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c-rootfs.mount: Deactivated successfully. Feb 12 19:15:41.166489 env[1140]: time="2024-02-12T19:15:41.166316213Z" level=info msg="shim disconnected" id=c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c Feb 12 19:15:41.166675 env[1140]: time="2024-02-12T19:15:41.166497525Z" level=warning msg="cleaning up after shim disconnected" id=c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c namespace=k8s.io Feb 12 19:15:41.166675 env[1140]: time="2024-02-12T19:15:41.166509204Z" level=info msg="cleaning up dead shim" Feb 12 19:15:41.167730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59-rootfs.mount: Deactivated successfully. Feb 12 19:15:41.173597 env[1140]: time="2024-02-12T19:15:41.173533304Z" level=info msg="shim disconnected" id=7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59 Feb 12 19:15:41.173597 env[1140]: time="2024-02-12T19:15:41.173584582Z" level=warning msg="cleaning up after shim disconnected" id=7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59 namespace=k8s.io Feb 12 19:15:41.173597 env[1140]: time="2024-02-12T19:15:41.173595061Z" level=info msg="cleaning up dead shim" Feb 12 19:15:41.175099 env[1140]: time="2024-02-12T19:15:41.175057399Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3670 runtime=io.containerd.runc.v2\n" Feb 12 19:15:41.175400 env[1140]: time="2024-02-12T19:15:41.175355506Z" level=info msg="TearDown network for sandbox \"c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c\" successfully" Feb 12 19:15:41.175400 env[1140]: time="2024-02-12T19:15:41.175385625Z" level=info msg="StopPodSandbox for \"c59051e588e60b7595a509ab938a2e8b3e2a63f1d885c2bb78289357e23cf08c\" returns successfully" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.186188283Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3683 runtime=io.containerd.runc.v2\n" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188041484Z" level=info msg="StopContainer for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" returns successfully" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188504784Z" level=info msg="StopPodSandbox for \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\"" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188568541Z" level=info msg="Container to stop \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188583901Z" level=info msg="Container to stop \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188595780Z" level=info msg="Container to stop 
\"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188608460Z" level=info msg="Container to stop \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:15:41.188610 env[1140]: time="2024-02-12T19:15:41.188618979Z" level=info msg="Container to stop \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:15:41.194748 systemd[1]: cri-containerd-1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65.scope: Deactivated successfully. Feb 12 19:15:41.224902 env[1140]: time="2024-02-12T19:15:41.223210541Z" level=info msg="shim disconnected" id=1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65 Feb 12 19:15:41.225112 env[1140]: time="2024-02-12T19:15:41.224869590Z" level=warning msg="cleaning up after shim disconnected" id=1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65 namespace=k8s.io Feb 12 19:15:41.225112 env[1140]: time="2024-02-12T19:15:41.224937987Z" level=info msg="cleaning up dead shim" Feb 12 19:15:41.237997 env[1140]: time="2024-02-12T19:15:41.237944311Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3714 runtime=io.containerd.runc.v2\n" Feb 12 19:15:41.238290 env[1140]: time="2024-02-12T19:15:41.238265177Z" level=info msg="TearDown network for sandbox \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" successfully" Feb 12 19:15:41.238340 env[1140]: time="2024-02-12T19:15:41.238292816Z" level=info msg="StopPodSandbox for \"1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65\" returns successfully" Feb 12 19:15:41.278575 kubelet[1968]: I0212 19:15:41.278532 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-kernel\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.278575 kubelet[1968]: I0212 19:15:41.278577 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-run\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279122 kubelet[1968]: I0212 19:15:41.278603 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69f7a0b3-52ae-4e36-acee-64daad43336b-clustermesh-secrets\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279122 kubelet[1968]: I0212 19:15:41.278626 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-config-path\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279122 kubelet[1968]: I0212 19:15:41.278646 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ebb3c31b-b83b-48e3-84c5-27b67f551477-cilium-config-path\") pod \"ebb3c31b-b83b-48e3-84c5-27b67f551477\" (UID: \"ebb3c31b-b83b-48e3-84c5-27b67f551477\") " Feb 12 19:15:41.279122 kubelet[1968]: I0212 19:15:41.278663 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-xtables-lock\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279122 kubelet[1968]: I0212 19:15:41.278684 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hmmm\" (UniqueName: \"kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-kube-api-access-7hmmm\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279122 kubelet[1968]: I0212 19:15:41.278702 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-hostproc\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279265 kubelet[1968]: I0212 19:15:41.278721 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-cgroup\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279265 kubelet[1968]: I0212 19:15:41.278739 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-hubble-tls\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279265 kubelet[1968]: I0212 19:15:41.278756 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cni-path\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279265 kubelet[1968]: I0212 19:15:41.278774 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-net\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279265 kubelet[1968]: I0212 19:15:41.278793 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-bpf-maps\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279265 kubelet[1968]: I0212 19:15:41.278809 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-lib-modules\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279395 kubelet[1968]: I0212 19:15:41.278829 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5sg\" (UniqueName: 
\"kubernetes.io/projected/ebb3c31b-b83b-48e3-84c5-27b67f551477-kube-api-access-sp5sg\") pod \"ebb3c31b-b83b-48e3-84c5-27b67f551477\" (UID: \"ebb3c31b-b83b-48e3-84c5-27b67f551477\") " Feb 12 19:15:41.279395 kubelet[1968]: I0212 19:15:41.278848 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-etc-cni-netd\") pod \"69f7a0b3-52ae-4e36-acee-64daad43336b\" (UID: \"69f7a0b3-52ae-4e36-acee-64daad43336b\") " Feb 12 19:15:41.279395 kubelet[1968]: I0212 19:15:41.279041 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.279395 kubelet[1968]: I0212 19:15:41.279042 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.279395 kubelet[1968]: I0212 19:15:41.279086 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.279556 kubelet[1968]: I0212 19:15:41.279319 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-hostproc" (OuterVolumeSpecName: "hostproc") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.279772 kubelet[1968]: W0212 19:15:41.279706 1968 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ebb3c31b-b83b-48e3-84c5-27b67f551477/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:15:41.280650 kubelet[1968]: I0212 19:15:41.279702 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.280650 kubelet[1968]: W0212 19:15:41.279716 1968 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/69f7a0b3-52ae-4e36-acee-64daad43336b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:15:41.280650 kubelet[1968]: I0212 19:15:41.280087 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.280650 kubelet[1968]: I0212 19:15:41.280132 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cni-path" (OuterVolumeSpecName: "cni-path") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.280650 kubelet[1968]: I0212 19:15:41.280150 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.280951 kubelet[1968]: I0212 19:15:41.280331 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.280951 kubelet[1968]: I0212 19:15:41.279757 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:15:41.281703 kubelet[1968]: I0212 19:15:41.281665 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb3c31b-b83b-48e3-84c5-27b67f551477-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ebb3c31b-b83b-48e3-84c5-27b67f551477" (UID: "ebb3c31b-b83b-48e3-84c5-27b67f551477"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:15:41.282359 kubelet[1968]: I0212 19:15:41.282331 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:15:41.283835 kubelet[1968]: I0212 19:15:41.283792 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:15:41.283930 kubelet[1968]: I0212 19:15:41.283909 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f7a0b3-52ae-4e36-acee-64daad43336b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:15:41.284250 kubelet[1968]: I0212 19:15:41.284210 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-kube-api-access-7hmmm" (OuterVolumeSpecName: "kube-api-access-7hmmm") pod "69f7a0b3-52ae-4e36-acee-64daad43336b" (UID: "69f7a0b3-52ae-4e36-acee-64daad43336b"). InnerVolumeSpecName "kube-api-access-7hmmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:15:41.286206 kubelet[1968]: I0212 19:15:41.286165 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb3c31b-b83b-48e3-84c5-27b67f551477-kube-api-access-sp5sg" (OuterVolumeSpecName: "kube-api-access-sp5sg") pod "ebb3c31b-b83b-48e3-84c5-27b67f551477" (UID: "ebb3c31b-b83b-48e3-84c5-27b67f551477"). InnerVolumeSpecName "kube-api-access-sp5sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:15:41.379678 kubelet[1968]: I0212 19:15:41.379630 1968 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379678 kubelet[1968]: I0212 19:15:41.379673 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379678 kubelet[1968]: I0212 19:15:41.379686 1968 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379700 1968 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7hmmm\" (UniqueName: \"kubernetes.io/projected/69f7a0b3-52ae-4e36-acee-64daad43336b-kube-api-access-7hmmm\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379709 1968 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379719 1968 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379729 1968 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sp5sg\" (UniqueName: \"kubernetes.io/projected/ebb3c31b-b83b-48e3-84c5-27b67f551477-kube-api-access-sp5sg\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379739 1968 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379748 1968 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379757 1968 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.379923 kubelet[1968]: I0212 19:15:41.379767 1968 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.380117 kubelet[1968]: I0212 19:15:41.379777 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.380117 kubelet[1968]: I0212 19:15:41.379786 1968 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69f7a0b3-52ae-4e36-acee-64daad43336b-clustermesh-secrets\") on node \"localhost\" 
DevicePath \"\"" Feb 12 19:15:41.380117 kubelet[1968]: I0212 19:15:41.379795 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f7a0b3-52ae-4e36-acee-64daad43336b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.380117 kubelet[1968]: I0212 19:15:41.379804 1968 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f7a0b3-52ae-4e36-acee-64daad43336b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.380117 kubelet[1968]: I0212 19:15:41.379813 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebb3c31b-b83b-48e3-84c5-27b67f551477-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:15:41.574350 kubelet[1968]: E0212 19:15:41.574311 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:15:41.581186 systemd[1]: Removed slice kubepods-burstable-pod69f7a0b3_52ae_4e36_acee_64daad43336b.slice. Feb 12 19:15:41.581270 systemd[1]: kubepods-burstable-pod69f7a0b3_52ae_4e36_acee_64daad43336b.slice: Consumed 7.397s CPU time. Feb 12 19:15:41.582162 systemd[1]: Removed slice kubepods-besteffort-podebb3c31b_b83b_48e3_84c5_27b67f551477.slice. Feb 12 19:15:41.792301 kubelet[1968]: I0212 19:15:41.792195 1968 scope.go:115] "RemoveContainer" containerID="5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3" Feb 12 19:15:41.795070 env[1140]: time="2024-02-12T19:15:41.795019260Z" level=info msg="RemoveContainer for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\"" Feb 12 19:15:41.800142 env[1140]: time="2024-02-12T19:15:41.800099203Z" level=info msg="RemoveContainer for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" returns successfully" Feb 12 19:15:41.800455 kubelet[1968]: I0212 19:15:41.800430 1968 scope.go:115] "RemoveContainer" containerID="5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3" Feb 12 19:15:41.800697 env[1140]: time="2024-02-12T19:15:41.800628101Z" level=error msg="ContainerStatus for \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\": not found" Feb 12 19:15:41.801038 kubelet[1968]: E0212 19:15:41.801016 1968 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\": not found" containerID="5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3" Feb 12 19:15:41.802079 kubelet[1968]: I0212 19:15:41.802037 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3} err="failed to get container status \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f8566fe60459b35275f16589719bc8712112b2c849a320553d82577bf23dff3\": not found" Feb 12 19:15:41.802079 kubelet[1968]: I0212 19:15:41.802077 1968 scope.go:115] "RemoveContainer" 
containerID="7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59" Feb 12 19:15:41.803777 env[1140]: time="2024-02-12T19:15:41.803732248Z" level=info msg="RemoveContainer for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\"" Feb 12 19:15:41.806779 env[1140]: time="2024-02-12T19:15:41.806328977Z" level=info msg="RemoveContainer for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" returns successfully" Feb 12 19:15:41.807270 kubelet[1968]: I0212 19:15:41.807248 1968 scope.go:115] "RemoveContainer" containerID="48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86" Feb 12 19:15:41.810141 env[1140]: time="2024-02-12T19:15:41.810101056Z" level=info msg="RemoveContainer for \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\"" Feb 12 19:15:41.812983 env[1140]: time="2024-02-12T19:15:41.812796461Z" level=info msg="RemoveContainer for \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\" returns successfully" Feb 12 19:15:41.813297 kubelet[1968]: I0212 19:15:41.813229 1968 scope.go:115] "RemoveContainer" containerID="5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba" Feb 12 19:15:41.818331 env[1140]: time="2024-02-12T19:15:41.817968879Z" level=info msg="RemoveContainer for \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\"" Feb 12 19:15:41.820644 env[1140]: time="2024-02-12T19:15:41.820598287Z" level=info msg="RemoveContainer for \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\" returns successfully" Feb 12 19:15:41.820857 kubelet[1968]: I0212 19:15:41.820833 1968 scope.go:115] "RemoveContainer" containerID="15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d" Feb 12 19:15:41.821785 env[1140]: time="2024-02-12T19:15:41.821755398Z" level=info msg="RemoveContainer for \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\"" Feb 12 19:15:41.823954 env[1140]: time="2024-02-12T19:15:41.823914025Z" level=info msg="RemoveContainer for \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\" returns successfully" Feb 12 19:15:41.824113 kubelet[1968]: I0212 19:15:41.824093 1968 scope.go:115] "RemoveContainer" containerID="36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538" Feb 12 19:15:41.825078 env[1140]: time="2024-02-12T19:15:41.825049497Z" level=info msg="RemoveContainer for \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\"" Feb 12 19:15:41.827197 env[1140]: time="2024-02-12T19:15:41.827165726Z" level=info msg="RemoveContainer for \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\" returns successfully" Feb 12 19:15:41.827368 kubelet[1968]: I0212 19:15:41.827349 1968 scope.go:115] "RemoveContainer" containerID="7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59" Feb 12 19:15:41.827650 env[1140]: time="2024-02-12T19:15:41.827586068Z" level=error msg="ContainerStatus for \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\": not found" Feb 12 19:15:41.827797 kubelet[1968]: E0212 19:15:41.827780 1968 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\": not found" 
containerID="7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59" Feb 12 19:15:41.827894 kubelet[1968]: I0212 19:15:41.827870 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59} err="failed to get container status \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f611c17e52390de879a4a8ddbbb9e86eed78e973eaf8d4fd9042e0003297b59\": not found" Feb 12 19:15:41.828057 kubelet[1968]: I0212 19:15:41.828039 1968 scope.go:115] "RemoveContainer" containerID="48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86" Feb 12 19:15:41.828316 env[1140]: time="2024-02-12T19:15:41.828267399Z" level=error msg="ContainerStatus for \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\": not found" Feb 12 19:15:41.828453 kubelet[1968]: E0212 19:15:41.828437 1968 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\": not found" containerID="48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86" Feb 12 19:15:41.828539 kubelet[1968]: I0212 19:15:41.828528 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86} err="failed to get container status \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\": rpc error: code = NotFound desc = an error occurred when try to find container \"48488edbf130bdc4b7610081befac20718a1b7898e14efd777ae15f54d5c5f86\": not found" Feb 12 19:15:41.828601 kubelet[1968]: I0212 19:15:41.828591 1968 scope.go:115] "RemoveContainer" containerID="5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba" Feb 12 19:15:41.828834 env[1140]: time="2024-02-12T19:15:41.828792737Z" level=error msg="ContainerStatus for \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\": not found" Feb 12 19:15:41.828980 kubelet[1968]: E0212 19:15:41.828963 1968 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\": not found" containerID="5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba" Feb 12 19:15:41.829076 kubelet[1968]: I0212 19:15:41.829063 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba} err="failed to get container status \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b89c459a8196618b1e1216562501fbfebef91cdefd25126cb1ae711b684d4ba\": not found" Feb 12 19:15:41.829157 kubelet[1968]: I0212 19:15:41.829147 1968 scope.go:115] "RemoveContainer" containerID="15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d" Feb 12 
19:15:41.829398 env[1140]: time="2024-02-12T19:15:41.829348353Z" level=error msg="ContainerStatus for \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\": not found" Feb 12 19:15:41.829540 kubelet[1968]: E0212 19:15:41.829526 1968 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\": not found" containerID="15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d" Feb 12 19:15:41.829627 kubelet[1968]: I0212 19:15:41.829615 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d} err="failed to get container status \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\": rpc error: code = NotFound desc = an error occurred when try to find container \"15c8e08fdfe21265b97512a8bb45262cc06664088636a5cd86b6451225cf374d\": not found" Feb 12 19:15:41.829691 kubelet[1968]: I0212 19:15:41.829680 1968 scope.go:115] "RemoveContainer" containerID="36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538" Feb 12 19:15:41.829955 env[1140]: time="2024-02-12T19:15:41.829900249Z" level=error msg="ContainerStatus for \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\": not found" Feb 12 19:15:41.830128 kubelet[1968]: E0212 19:15:41.830102 1968 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\": not found" containerID="36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538" Feb 12 19:15:41.830170 kubelet[1968]: I0212 19:15:41.830142 1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538} err="failed to get container status \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\": rpc error: code = NotFound desc = an error occurred when try to find container \"36784d085edd4113f7bedbef68ef2d8940afbe5519a1551c9c9beebbe1eb1538\": not found" Feb 12 19:15:42.068032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65-rootfs.mount: Deactivated successfully. Feb 12 19:15:42.068155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1204a3be3cc793e350c2fbacef19f393129ac2449358ac24b3a46328979daa65-shm.mount: Deactivated successfully. Feb 12 19:15:42.068222 systemd[1]: var-lib-kubelet-pods-ebb3c31b\x2db83b\x2d48e3\x2d84c5\x2d27b67f551477-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsp5sg.mount: Deactivated successfully. Feb 12 19:15:42.068281 systemd[1]: var-lib-kubelet-pods-69f7a0b3\x2d52ae\x2d4e36\x2dacee\x2d64daad43336b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hmmm.mount: Deactivated successfully. 
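Each `RemoveContainer … returns successfully` above is immediately followed by a `ContainerStatus … rpc error: code = NotFound` when the kubelet re-queries the deleted ID, and the kubelet logs it and moves on: for a deletion, NotFound is the desired end state. A sketch of that tolerance pattern using the standard gRPC status codes; `queryStatus` is a hypothetical stand-in for the runtime call, wired to fail the way the log shows.

```go
// notfound.go - treat gRPC NotFound as a benign outcome when re-checking
// a container that was just removed, as the kubelet does above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// queryStatus stands in for a CRI ContainerStatus call; here it always
// fails the way the runtime does once a container has been removed.
func queryStatus(id string) error {
	return status.Errorf(codes.NotFound,
		"an error occurred when try to find container %q: not found", id)
}

// ensureGone treats NotFound as success: the container we set out to
// delete is already gone.
func ensureGone(id string) error {
	err := queryStatus(id)
	switch status.Code(err) {
	case codes.OK:
		return fmt.Errorf("container %s still exists", id)
	case codes.NotFound:
		return nil // already deleted: the state we wanted
	default:
		return err
	}
}

func main() {
	fmt.Println("cleanup error:", ensureGone("5f8566fe6045"))
}
```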
Feb 12 19:15:42.068332 systemd[1]: var-lib-kubelet-pods-69f7a0b3\x2d52ae\x2d4e36\x2dacee\x2d64daad43336b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:15:42.068391 systemd[1]: var-lib-kubelet-pods-69f7a0b3\x2d52ae\x2d4e36\x2dacee\x2d64daad43336b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:15:43.021010 sshd[3569]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:43.024690 systemd[1]: Started sshd@22-10.0.0.42:22-10.0.0.1:38356.service. Feb 12 19:15:43.025293 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:39504.service: Deactivated successfully. Feb 12 19:15:43.026137 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:15:43.026336 systemd[1]: session-22.scope: Consumed 1.874s CPU time. Feb 12 19:15:43.028850 systemd-logind[1129]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:15:43.031589 systemd-logind[1129]: Removed session 22. Feb 12 19:15:43.058828 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 38356 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:43.060115 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:43.064314 systemd-logind[1129]: New session 23 of user core. Feb 12 19:15:43.065187 systemd[1]: Started session-23.scope. Feb 12 19:15:43.576086 kubelet[1968]: I0212 19:15:43.576054 1968 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=69f7a0b3-52ae-4e36-acee-64daad43336b path="/var/lib/kubelet/pods/69f7a0b3-52ae-4e36-acee-64daad43336b/volumes" Feb 12 19:15:43.576607 kubelet[1968]: I0212 19:15:43.576591 1968 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ebb3c31b-b83b-48e3-84c5-27b67f551477 path="/var/lib/kubelet/pods/ebb3c31b-b83b-48e3-84c5-27b67f551477/volumes" Feb 12 19:15:43.862632 sshd[3735]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:43.866361 systemd[1]: Started sshd@23-10.0.0.42:22-10.0.0.1:38362.service. Feb 12 19:15:43.876948 systemd[1]: sshd@22-10.0.0.42:22-10.0.0.1:38356.service: Deactivated successfully. Feb 12 19:15:43.878152 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:15:43.878954 systemd-logind[1129]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:15:43.880089 systemd-logind[1129]: Removed session 23. 
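The `var-lib-kubelet-pods-69f7a0b3\x2d52ae…mount` units above are systemd's path escaping at work: `/` becomes `-`, while literal `-` (and other special bytes, e.g. the `~` in `kubernetes.io\x7eprojected`) is hex-escaped as `\xXX`, so each mount path maps to a unique unit name. A simplified Go sketch of the forward mapping; the real rules (`systemd-escape --path`) escape more characters than this handles.

```go
// escape.go - simplified systemd path escaping, enough to reproduce the
// "\x2d" mount unit names in the journal. The real implementation escapes
// additional characters; this sketch only handles '/' and '-'.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c == '-':
			b.WriteString(`\x2d`) // literal dashes are hex-escaped
		default:
			b.WriteByte(c)
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/69f7a0b3-52ae-4e36-acee-64daad43336b/volumes"
	fmt.Println(escapePath(p) + ".mount")
	// var-lib-kubelet-pods-69f7a0b3\x2d52ae\x2d4e36\x2dacee\x2d64daad43336b-volumes.mount
}
```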
Feb 12 19:15:43.891930 kubelet[1968]: I0212 19:15:43.891871 1968 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:15:43.892051 kubelet[1968]: E0212 19:15:43.891944 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69f7a0b3-52ae-4e36-acee-64daad43336b" containerName="apply-sysctl-overwrites" Feb 12 19:15:43.892051 kubelet[1968]: E0212 19:15:43.891965 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69f7a0b3-52ae-4e36-acee-64daad43336b" containerName="mount-bpf-fs" Feb 12 19:15:43.892051 kubelet[1968]: E0212 19:15:43.891973 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69f7a0b3-52ae-4e36-acee-64daad43336b" containerName="mount-cgroup" Feb 12 19:15:43.892051 kubelet[1968]: E0212 19:15:43.891980 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ebb3c31b-b83b-48e3-84c5-27b67f551477" containerName="cilium-operator" Feb 12 19:15:43.892051 kubelet[1968]: E0212 19:15:43.891987 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69f7a0b3-52ae-4e36-acee-64daad43336b" containerName="clean-cilium-state" Feb 12 19:15:43.892051 kubelet[1968]: E0212 19:15:43.891994 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69f7a0b3-52ae-4e36-acee-64daad43336b" containerName="cilium-agent" Feb 12 19:15:43.892205 kubelet[1968]: I0212 19:15:43.892183 1968 memory_manager.go:346] "RemoveStaleState removing state" podUID="69f7a0b3-52ae-4e36-acee-64daad43336b" containerName="cilium-agent" Feb 12 19:15:43.892228 kubelet[1968]: I0212 19:15:43.892209 1968 memory_manager.go:346] "RemoveStaleState removing state" podUID="ebb3c31b-b83b-48e3-84c5-27b67f551477" containerName="cilium-operator" Feb 12 19:15:43.900184 systemd[1]: Created slice kubepods-burstable-pod51a4ec0d_d7b9_4110_ac3e_d1dbda3fc5fb.slice. Feb 12 19:15:43.907179 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 38362 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:43.910392 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:43.914178 systemd-logind[1129]: New session 24 of user core. Feb 12 19:15:43.915092 systemd[1]: Started session-24.scope. 
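`Created slice kubepods-burstable-pod51a4ec0d_d7b9_4110_ac3e_d1dbda3fc5fb.slice` shows the kubelet's systemd cgroup naming for the new pod: the QoS class plus the pod UID with dashes swapped for underscores, since dashes act as hierarchy separators in slice names. A sketch of that mapping as observed in this journal; the helper name is mine, not the kubelet's.

```go
// slicename.go - derive the systemd slice name the kubelet (systemd cgroup
// driver) uses for a pod, matching "Created slice kubepods-burstable-pod...".
package main

import (
	"fmt"
	"strings"
)

// podSliceName is a hypothetical helper mirroring the observed convention:
// kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"))
	// kubepods-burstable-pod51a4ec0d_d7b9_4110_ac3e_d1dbda3fc5fb.slice
}
```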
Feb 12 19:15:43.995051 kubelet[1968]: I0212 19:15:43.995006 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-bpf-maps\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995051 kubelet[1968]: I0212 19:15:43.995052 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-xtables-lock\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995234 kubelet[1968]: I0212 19:15:43.995073 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-ipsec-secrets\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995234 kubelet[1968]: I0212 19:15:43.995092 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hubble-tls\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995234 kubelet[1968]: I0212 19:15:43.995191 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-net\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995307 kubelet[1968]: I0212 19:15:43.995266 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-config-path\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995332 kubelet[1968]: I0212 19:15:43.995312 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hostproc\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995363 kubelet[1968]: I0212 19:15:43.995334 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cni-path\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995363 kubelet[1968]: I0212 19:15:43.995361 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-run\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995413 kubelet[1968]: I0212 19:15:43.995381 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-cgroup\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995413 kubelet[1968]: I0212 19:15:43.995409 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-etc-cni-netd\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995461 kubelet[1968]: I0212 19:15:43.995426 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-lib-modules\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995461 kubelet[1968]: I0212 19:15:43.995445 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8jp\" (UniqueName: \"kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-kube-api-access-2l8jp\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995517 kubelet[1968]: I0212 19:15:43.995464 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-clustermesh-secrets\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:43.995517 kubelet[1968]: I0212 19:15:43.995484 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-kernel\") pod \"cilium-m8vx5\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") " pod="kube-system/cilium-m8vx5" Feb 12 19:15:44.038406 sshd[3747]: pam_unix(sshd:session): session closed for user core Feb 12 19:15:44.041699 systemd[1]: Started sshd@24-10.0.0.42:22-10.0.0.1:38368.service. Feb 12 19:15:44.049479 systemd[1]: sshd@23-10.0.0.42:22-10.0.0.1:38362.service: Deactivated successfully. Feb 12 19:15:44.050397 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:15:44.051075 systemd-logind[1129]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:15:44.054319 systemd-logind[1129]: Removed session 24. Feb 12 19:15:44.080843 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 38368 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:15:44.082389 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:44.086019 systemd-logind[1129]: New session 25 of user core. Feb 12 19:15:44.086522 systemd[1]: Started session-25.scope. 
Feb 12 19:15:44.203346 kubelet[1968]: E0212 19:15:44.203234 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:44.203994 env[1140]: time="2024-02-12T19:15:44.203949026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8vx5,Uid:51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb,Namespace:kube-system,Attempt:0,}"
Feb 12 19:15:44.219993 env[1140]: time="2024-02-12T19:15:44.219875379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:15:44.219993 env[1140]: time="2024-02-12T19:15:44.219954016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:15:44.220212 env[1140]: time="2024-02-12T19:15:44.219988095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:15:44.220212 env[1140]: time="2024-02-12T19:15:44.220158450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e pid=3783 runtime=io.containerd.runc.v2
Feb 12 19:15:44.230863 systemd[1]: Started cri-containerd-98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e.scope.
Feb 12 19:15:44.281470 env[1140]: time="2024-02-12T19:15:44.281401815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8vx5,Uid:51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e\""
Feb 12 19:15:44.282133 kubelet[1968]: E0212 19:15:44.282113 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:44.285269 env[1140]: time="2024-02-12T19:15:44.285228978Z" level=info msg="CreateContainer within sandbox \"98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:15:44.339751 env[1140]: time="2024-02-12T19:15:44.339669152Z" level=info msg="CreateContainer within sandbox \"98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\""
Feb 12 19:15:44.340274 env[1140]: time="2024-02-12T19:15:44.340211215Z" level=info msg="StartContainer for \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\""
Feb 12 19:15:44.354213 systemd[1]: Started cri-containerd-7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd.scope.
Feb 12 19:15:44.372660 systemd[1]: cri-containerd-7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd.scope: Deactivated successfully.
Feb 12 19:15:44.414095 env[1140]: time="2024-02-12T19:15:44.414035475Z" level=info msg="shim disconnected" id=7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd
Feb 12 19:15:44.414310 env[1140]: time="2024-02-12T19:15:44.414106193Z" level=warning msg="cleaning up after shim disconnected" id=7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd namespace=k8s.io
Feb 12 19:15:44.414310 env[1140]: time="2024-02-12T19:15:44.414117913Z" level=info msg="cleaning up dead shim"
Feb 12 19:15:44.421596 env[1140]: time="2024-02-12T19:15:44.421481847Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:15:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 19:15:44.421833 env[1140]: time="2024-02-12T19:15:44.421727720Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed"
Feb 12 19:15:44.422071 env[1140]: time="2024-02-12T19:15:44.422021511Z" level=error msg="Failed to pipe stderr of container \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\"" error="reading from a closed fifo"
Feb 12 19:15:44.422138 env[1140]: time="2024-02-12T19:15:44.422034630Z" level=error msg="Failed to pipe stdout of container \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\"" error="reading from a closed fifo"
Feb 12 19:15:44.423947 env[1140]: time="2024-02-12T19:15:44.423867654Z" level=error msg="StartContainer for \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 19:15:44.424203 kubelet[1968]: E0212 19:15:44.424133 1968 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd"
Feb 12 19:15:44.424447 kubelet[1968]: E0212 19:15:44.424424 1968 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 19:15:44.424447 kubelet[1968]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 19:15:44.424447 kubelet[1968]: rm /hostbin/cilium-mount
Feb 12 19:15:44.426038 kubelet[1968]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2l8jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-m8vx5_kube-system(51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 19:15:44.426038 kubelet[1968]: E0212 19:15:44.424473 1968 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-m8vx5" podUID=51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb
Feb 12 19:15:44.663919 kubelet[1968]: E0212 19:15:44.663866 1968 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:15:44.808286 env[1140]: time="2024-02-12T19:15:44.808237648Z" level=info msg="StopPodSandbox for \"98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e\""
Feb 12 19:15:44.808593 env[1140]: time="2024-02-12T19:15:44.808537879Z" level=info msg="Container to stop \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:15:44.815092 systemd[1]: cri-containerd-98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e.scope: Deactivated successfully.
Feb 12 19:15:44.839730 env[1140]: time="2024-02-12T19:15:44.839672045Z" level=info msg="shim disconnected" id=98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e
Feb 12 19:15:44.840023 env[1140]: time="2024-02-12T19:15:44.840000395Z" level=warning msg="cleaning up after shim disconnected" id=98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e namespace=k8s.io
Feb 12 19:15:44.840102 env[1140]: time="2024-02-12T19:15:44.840089033Z" level=info msg="cleaning up dead shim"
Feb 12 19:15:44.847702 env[1140]: time="2024-02-12T19:15:44.847662161Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3873 runtime=io.containerd.runc.v2\n"
Feb 12 19:15:44.848158 env[1140]: time="2024-02-12T19:15:44.848125427Z" level=info msg="TearDown network for sandbox \"98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e\" successfully"
Feb 12 19:15:44.848256 env[1140]: time="2024-02-12T19:15:44.848236783Z" level=info msg="StopPodSandbox for \"98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e\" returns successfully"
Feb 12 19:15:44.904631 kubelet[1968]: I0212 19:15:44.904586 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-xtables-lock\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904631 kubelet[1968]: I0212 19:15:44.904637 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-etc-cni-netd\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904669 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-lib-modules\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904695 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hubble-tls\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904713 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cni-path\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904731 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-bpf-maps\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904753 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-config-path\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904775 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-kernel\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904792 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-run\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904811 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-net\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904835 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-cgroup\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.904865 kubelet[1968]: I0212 19:15:44.904861 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-clustermesh-secrets\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.905136 kubelet[1968]: I0212 19:15:44.904909 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hostproc\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.905136 kubelet[1968]: I0212 19:15:44.904932 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-ipsec-secrets\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.905136 kubelet[1968]: I0212 19:15:44.904956 1968 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l8jp\" (UniqueName: \"kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-kube-api-access-2l8jp\") pod \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\" (UID: \"51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb\") "
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905239 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905247 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905274 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905307 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905308 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905314 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hostproc" (OuterVolumeSpecName: "hostproc") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905332 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905348 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905511 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cni-path" (OuterVolumeSpecName: "cni-path") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: I0212 19:15:44.905526 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:15:44.906715 kubelet[1968]: W0212 19:15:44.905617 1968 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:15:44.907942 kubelet[1968]: I0212 19:15:44.907907 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-kube-api-access-2l8jp" (OuterVolumeSpecName: "kube-api-access-2l8jp") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "kube-api-access-2l8jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:15:44.908133 kubelet[1968]: I0212 19:15:44.908107 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:15:44.908187 kubelet[1968]: I0212 19:15:44.908112 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:15:44.908629 kubelet[1968]: I0212 19:15:44.908584 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:15:44.909819 kubelet[1968]: I0212 19:15:44.909772 1968 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" (UID: "51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005092 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005132 1968 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005144 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005154 1968 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005164 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005174 1968 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005183 1968 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005193 1968 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005207 1968 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2l8jp\" (UniqueName: \"kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-kube-api-access-2l8jp\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005215 1968 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005225 1968 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005234 1968 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005243 1968 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.005239 kubelet[1968]: I0212 19:15:45.005252 1968 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.006535 kubelet[1968]: I0212 19:15:45.005269 1968 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 19:15:45.101432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e-rootfs.mount: Deactivated successfully.
Feb 12 19:15:45.101540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98bdf46c840804d60674b19d73185028e03e2a92cff2b2212691316c20b1211e-shm.mount: Deactivated successfully.
Feb 12 19:15:45.101595 systemd[1]: var-lib-kubelet-pods-51a4ec0d\x2dd7b9\x2d4110\x2dac3e\x2dd1dbda3fc5fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2l8jp.mount: Deactivated successfully.
Feb 12 19:15:45.101662 systemd[1]: var-lib-kubelet-pods-51a4ec0d\x2dd7b9\x2d4110\x2dac3e\x2dd1dbda3fc5fb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:15:45.101710 systemd[1]: var-lib-kubelet-pods-51a4ec0d\x2dd7b9\x2d4110\x2dac3e\x2dd1dbda3fc5fb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:15:45.101756 systemd[1]: var-lib-kubelet-pods-51a4ec0d\x2dd7b9\x2d4110\x2dac3e\x2dd1dbda3fc5fb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:15:45.578955 systemd[1]: Removed slice kubepods-burstable-pod51a4ec0d_d7b9_4110_ac3e_d1dbda3fc5fb.slice.
Feb 12 19:15:45.811144 kubelet[1968]: I0212 19:15:45.809570 1968 scope.go:115] "RemoveContainer" containerID="7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd"
Feb 12 19:15:45.812382 env[1140]: time="2024-02-12T19:15:45.812325293Z" level=info msg="RemoveContainer for \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\""
Feb 12 19:15:45.815684 env[1140]: time="2024-02-12T19:15:45.815630804Z" level=info msg="RemoveContainer for \"7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd\" returns successfully"
Feb 12 19:15:45.880341 kubelet[1968]: I0212 19:15:45.880177 1968 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:15:45.880341 kubelet[1968]: E0212 19:15:45.880247 1968 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" containerName="mount-cgroup"
Feb 12 19:15:45.880341 kubelet[1968]: I0212 19:15:45.880288 1968 memory_manager.go:346] "RemoveStaleState removing state" podUID="51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb" containerName="mount-cgroup"
Feb 12 19:15:45.885345 systemd[1]: Created slice kubepods-burstable-pode1fec409_b3cb_48b3_a35c_e359bfff7ded.slice.
Feb 12 19:15:45.913837 kubelet[1968]: I0212 19:15:45.913781 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-etc-cni-netd\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.913837 kubelet[1968]: I0212 19:15:45.913835 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-xtables-lock\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914021 kubelet[1968]: I0212 19:15:45.913860 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1fec409-b3cb-48b3-a35c-e359bfff7ded-clustermesh-secrets\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914021 kubelet[1968]: I0212 19:15:45.913971 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-cilium-cgroup\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914067 kubelet[1968]: I0212 19:15:45.914033 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1fec409-b3cb-48b3-a35c-e359bfff7ded-cilium-ipsec-secrets\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914067 kubelet[1968]: I0212 19:15:45.914060 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-hostproc\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914123 kubelet[1968]: I0212 19:15:45.914089 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-cni-path\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914149 kubelet[1968]: I0212 19:15:45.914123 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-bpf-maps\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914171 kubelet[1968]: I0212 19:15:45.914154 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-cilium-run\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914195 kubelet[1968]: I0212 19:15:45.914176 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-lib-modules\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914219 kubelet[1968]: I0212 19:15:45.914205 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1fec409-b3cb-48b3-a35c-e359bfff7ded-cilium-config-path\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914242 kubelet[1968]: I0212 19:15:45.914226 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-host-proc-sys-kernel\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914266 kubelet[1968]: I0212 19:15:45.914256 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wq5\" (UniqueName: \"kubernetes.io/projected/e1fec409-b3cb-48b3-a35c-e359bfff7ded-kube-api-access-t6wq5\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914291 kubelet[1968]: I0212 19:15:45.914280 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1fec409-b3cb-48b3-a35c-e359bfff7ded-host-proc-sys-net\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:45.914313 kubelet[1968]: I0212 19:15:45.914300 1968 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1fec409-b3cb-48b3-a35c-e359bfff7ded-hubble-tls\") pod \"cilium-9qx4f\" (UID: \"e1fec409-b3cb-48b3-a35c-e359bfff7ded\") " pod="kube-system/cilium-9qx4f"
Feb 12 19:15:46.189958 kubelet[1968]: E0212 19:15:46.189819 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:46.191330 env[1140]: time="2024-02-12T19:15:46.191293709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9qx4f,Uid:e1fec409-b3cb-48b3-a35c-e359bfff7ded,Namespace:kube-system,Attempt:0,}"
Feb 12 19:15:46.212243 env[1140]: time="2024-02-12T19:15:46.212006390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:15:46.212243 env[1140]: time="2024-02-12T19:15:46.212050389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:15:46.212243 env[1140]: time="2024-02-12T19:15:46.212061269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:15:46.212243 env[1140]: time="2024-02-12T19:15:46.212216585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2 pid=3900 runtime=io.containerd.runc.v2
Feb 12 19:15:46.235670 systemd[1]: Started cri-containerd-b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2.scope.
Feb 12 19:15:46.265019 env[1140]: time="2024-02-12T19:15:46.264971364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9qx4f,Uid:e1fec409-b3cb-48b3-a35c-e359bfff7ded,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\""
Feb 12 19:15:46.265978 kubelet[1968]: E0212 19:15:46.265954 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:46.268054 env[1140]: time="2024-02-12T19:15:46.268020054Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:15:46.282810 env[1140]: time="2024-02-12T19:15:46.282751153Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b\""
Feb 12 19:15:46.283401 env[1140]: time="2024-02-12T19:15:46.283369578Z" level=info msg="StartContainer for \"c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b\""
Feb 12 19:15:46.297922 systemd[1]: Started cri-containerd-c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b.scope.
Feb 12 19:15:46.340121 env[1140]: time="2024-02-12T19:15:46.340072546Z" level=info msg="StartContainer for \"c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b\" returns successfully"
Feb 12 19:15:46.351193 systemd[1]: cri-containerd-c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b.scope: Deactivated successfully.
Feb 12 19:15:46.377320 env[1140]: time="2024-02-12T19:15:46.377248366Z" level=info msg="shim disconnected" id=c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b
Feb 12 19:15:46.377320 env[1140]: time="2024-02-12T19:15:46.377303684Z" level=warning msg="cleaning up after shim disconnected" id=c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b namespace=k8s.io
Feb 12 19:15:46.377320 env[1140]: time="2024-02-12T19:15:46.377314164Z" level=info msg="cleaning up dead shim"
Feb 12 19:15:46.390425 env[1140]: time="2024-02-12T19:15:46.390367302Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n"
Feb 12 19:15:46.574201 kubelet[1968]: E0212 19:15:46.574117 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:46.813677 kubelet[1968]: E0212 19:15:46.813645 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:46.816282 env[1140]: time="2024-02-12T19:15:46.816236086Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:15:46.835899 env[1140]: time="2024-02-12T19:15:46.835757234Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce\""
Feb 12 19:15:46.836503 env[1140]: time="2024-02-12T19:15:46.836469777Z" level=info msg="StartContainer for \"09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce\""
Feb 12 19:15:46.851551 systemd[1]: Started cri-containerd-09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce.scope.
Feb 12 19:15:46.888791 env[1140]: time="2024-02-12T19:15:46.888726968Z" level=info msg="StartContainer for \"09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce\" returns successfully"
Feb 12 19:15:46.893689 systemd[1]: cri-containerd-09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce.scope: Deactivated successfully.
Feb 12 19:15:46.915247 env[1140]: time="2024-02-12T19:15:46.915180796Z" level=info msg="shim disconnected" id=09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce
Feb 12 19:15:46.915247 env[1140]: time="2024-02-12T19:15:46.915229515Z" level=warning msg="cleaning up after shim disconnected" id=09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce namespace=k8s.io
Feb 12 19:15:46.915247 env[1140]: time="2024-02-12T19:15:46.915239594Z" level=info msg="cleaning up dead shim"
Feb 12 19:15:46.927150 env[1140]: time="2024-02-12T19:15:46.927098640Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4045 runtime=io.containerd.runc.v2\n"
Feb 12 19:15:47.519602 kubelet[1968]: W0212 19:15:47.519538 1968 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51a4ec0d_d7b9_4110_ac3e_d1dbda3fc5fb.slice/cri-containerd-7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd.scope WatchSource:0}: container "7d2297817204bb732b47e7eed3608172a84eb5aafdf43978ca7eb6c684607bbd" in namespace "k8s.io": not found
Feb 12 19:15:47.575711 kubelet[1968]: I0212 19:15:47.575671 1968 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb path="/var/lib/kubelet/pods/51a4ec0d-d7b9-4110-ac3e-d1dbda3fc5fb/volumes"
Feb 12 19:15:47.819007 kubelet[1968]: E0212 19:15:47.818989 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:47.822614 env[1140]: time="2024-02-12T19:15:47.822526154Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:15:47.837306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009659374.mount: Deactivated successfully.
Feb 12 19:15:47.841915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272439339.mount: Deactivated successfully.
Feb 12 19:15:47.844471 env[1140]: time="2024-02-12T19:15:47.844417605Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df\""
Feb 12 19:15:47.849659 env[1140]: time="2024-02-12T19:15:47.849492426Z" level=info msg="StartContainer for \"10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df\""
Feb 12 19:15:47.868361 systemd[1]: Started cri-containerd-10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df.scope.
Feb 12 19:15:47.906147 env[1140]: time="2024-02-12T19:15:47.906099797Z" level=info msg="StartContainer for \"10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df\" returns successfully"
Feb 12 19:15:47.906737 systemd[1]: cri-containerd-10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df.scope: Deactivated successfully.
Feb 12 19:15:47.932607 env[1140]: time="2024-02-12T19:15:47.932554759Z" level=info msg="shim disconnected" id=10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df
Feb 12 19:15:47.932607 env[1140]: time="2024-02-12T19:15:47.932606038Z" level=warning msg="cleaning up after shim disconnected" id=10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df namespace=k8s.io
Feb 12 19:15:47.932930 env[1140]: time="2024-02-12T19:15:47.932617038Z" level=info msg="cleaning up dead shim"
Feb 12 19:15:47.940568 env[1140]: time="2024-02-12T19:15:47.940525323Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4104 runtime=io.containerd.runc.v2\n"
Feb 12 19:15:48.822872 kubelet[1968]: E0212 19:15:48.822826 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:48.825901 env[1140]: time="2024-02-12T19:15:48.825167749Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:15:48.839762 env[1140]: time="2024-02-12T19:15:48.839690715Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea\""
Feb 12 19:15:48.840500 env[1140]: time="2024-02-12T19:15:48.840470502Z" level=info msg="StartContainer for \"9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea\""
Feb 12 19:15:48.862573 systemd[1]: Started cri-containerd-9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea.scope.
Feb 12 19:15:48.902545 env[1140]: time="2024-02-12T19:15:48.902476542Z" level=info msg="StartContainer for \"9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea\" returns successfully"
Feb 12 19:15:48.904220 systemd[1]: cri-containerd-9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea.scope: Deactivated successfully.
Feb 12 19:15:48.928347 env[1140]: time="2024-02-12T19:15:48.927773054Z" level=info msg="shim disconnected" id=9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea
Feb 12 19:15:48.928347 env[1140]: time="2024-02-12T19:15:48.927817373Z" level=warning msg="cleaning up after shim disconnected" id=9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea namespace=k8s.io
Feb 12 19:15:48.928347 env[1140]: time="2024-02-12T19:15:48.927826373Z" level=info msg="cleaning up dead shim"
Feb 12 19:15:48.936602 env[1140]: time="2024-02-12T19:15:48.936533952Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:15:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n"
Feb 12 19:15:49.205394 systemd[1]: run-containerd-runc-k8s.io-9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea-runc.Jf1z7X.mount: Deactivated successfully.
Feb 12 19:15:49.205503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea-rootfs.mount: Deactivated successfully.
Feb 12 19:15:49.664870 kubelet[1968]: E0212 19:15:49.664823 1968 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:15:49.827495 kubelet[1968]: E0212 19:15:49.827471 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:49.830483 env[1140]: time="2024-02-12T19:15:49.830228543Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:15:49.849083 env[1140]: time="2024-02-12T19:15:49.849040382Z" level=info msg="CreateContainer within sandbox \"b1411e2ed100458859f0b0517c7e1f641e32b9cfa483e6bd11020dbe47aeb1c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a598d1c8fa20d48fd15a3f91f2aaa6ddce72b0ead4a7e5d02f33bd4b4e578c2\""
Feb 12 19:15:49.849804 env[1140]: time="2024-02-12T19:15:49.849742693Z" level=info msg="StartContainer for \"4a598d1c8fa20d48fd15a3f91f2aaa6ddce72b0ead4a7e5d02f33bd4b4e578c2\""
Feb 12 19:15:49.867497 systemd[1]: Started cri-containerd-4a598d1c8fa20d48fd15a3f91f2aaa6ddce72b0ead4a7e5d02f33bd4b4e578c2.scope.
Feb 12 19:15:49.920500 env[1140]: time="2024-02-12T19:15:49.920381709Z" level=info msg="StartContainer for \"4a598d1c8fa20d48fd15a3f91f2aaa6ddce72b0ead4a7e5d02f33bd4b4e578c2\" returns successfully"
Feb 12 19:15:50.165906 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 12 19:15:50.631753 kubelet[1968]: W0212 19:15:50.631679 1968 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1fec409_b3cb_48b3_a35c_e359bfff7ded.slice/cri-containerd-c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b.scope WatchSource:0}: task c9f662ed5d1fc2eb6ee568462837449ae2dc674dda279e2386c6283a04f1fa1b not found: not found
Feb 12 19:15:50.832122 kubelet[1968]: E0212 19:15:50.832077 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:50.852214 kubelet[1968]: I0212 19:15:50.852164 1968 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9qx4f" podStartSLOduration=5.852126903 podCreationTimestamp="2024-02-12 19:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:15:50.850795756 +0000 UTC m=+91.393603637" watchObservedRunningTime="2024-02-12 19:15:50.852126903 +0000 UTC m=+91.394934824"
Feb 12 19:15:51.545430 kubelet[1968]: I0212 19:15:51.545385 1968 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:15:51.545320086 +0000 UTC m=+92.088128007 LastTransitionTime:2024-02-12 19:15:51.545320086 +0000 UTC m=+92.088128007 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 19:15:52.191313 kubelet[1968]: E0212 19:15:52.191287 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:52.364353 systemd[1]: run-containerd-runc-k8s.io-4a598d1c8fa20d48fd15a3f91f2aaa6ddce72b0ead4a7e5d02f33bd4b4e578c2-runc.3o71l2.mount: Deactivated successfully.
Feb 12 19:15:52.912978 systemd-networkd[1041]: lxc_health: Link UP
Feb 12 19:15:52.932964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:15:52.933378 systemd-networkd[1041]: lxc_health: Gained carrier
Feb 12 19:15:53.740258 kubelet[1968]: W0212 19:15:53.740106 1968 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1fec409_b3cb_48b3_a35c_e359bfff7ded.slice/cri-containerd-09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce.scope WatchSource:0}: task 09127aa8c9cd25cc10e38df345ac0ebd8fb35346b60cef159a1738e835c8f1ce not found: not found
Feb 12 19:15:54.191855 kubelet[1968]: E0212 19:15:54.191818 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:54.257108 systemd-networkd[1041]: lxc_health: Gained IPv6LL
Feb 12 19:15:54.514634 systemd[1]: run-containerd-runc-k8s.io-4a598d1c8fa20d48fd15a3f91f2aaa6ddce72b0ead4a7e5d02f33bd4b4e578c2-runc.ji0JAb.mount: Deactivated successfully.
Feb 12 19:15:54.573382 kubelet[1968]: E0212 19:15:54.573336 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:54.838998 kubelet[1968]: E0212 19:15:54.838957 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:55.840248 kubelet[1968]: E0212 19:15:55.840202 1968 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:15:56.847086 kubelet[1968]: W0212 19:15:56.847028 1968 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1fec409_b3cb_48b3_a35c_e359bfff7ded.slice/cri-containerd-10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df.scope WatchSource:0}: task 10cb57b0e383b8e7c1fc46f45c59d8180c5d7a9835ff59d69bba1c46fc80f2df not found: not found
Feb 12 19:15:58.877646 sshd[3760]: pam_unix(sshd:session): session closed for user core
Feb 12 19:15:58.881480 systemd[1]: sshd@24-10.0.0.42:22-10.0.0.1:38368.service: Deactivated successfully.
Feb 12 19:15:58.882240 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 19:15:58.882871 systemd-logind[1129]: Session 25 logged out. Waiting for processes to exit.
Feb 12 19:15:58.884287 systemd-logind[1129]: Removed session 25.
Feb 12 19:15:59.954470 kubelet[1968]: W0212 19:15:59.954408 1968 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1fec409_b3cb_48b3_a35c_e359bfff7ded.slice/cri-containerd-9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea.scope WatchSource:0}: task 9e4baace2fd6af46a879d97a247820cab16e84d52af6b333e1b47d2a61e301ea not found: not found