Feb 12 19:22:39.771376 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:22:39.771398 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:22:39.771406 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:22:39.771411 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 12 19:22:39.771416 kernel: random: crng init done
Feb 12 19:22:39.771422 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:22:39.771428 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:22:39.771435 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:22:39.771440 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771446 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771451 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771456 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771462 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771467 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771475 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771481 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771487 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:22:39.771492 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:22:39.771498 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:22:39.771504 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:22:39.771509 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 12 19:22:39.771515 kernel: Zone ranges:
Feb 12 19:22:39.771521 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:22:39.771527 kernel: DMA32 empty
Feb 12 19:22:39.771533 kernel: Normal empty
Feb 12 19:22:39.771539 kernel: Movable zone start for each node
Feb 12 19:22:39.771544 kernel: Early memory node ranges
Feb 12 19:22:39.771550 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:22:39.771556 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:22:39.771561 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:22:39.771567 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:22:39.771573 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:22:39.771578 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:22:39.771584 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:22:39.771590 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:22:39.771596 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:22:39.771603 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:22:39.771608 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:22:39.771614 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:22:39.771620 kernel: psci: Trusted OS migration not required
Feb 12 19:22:39.771627 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:22:39.771634 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:22:39.771641 kernel: ACPI: SRAT not present
Feb 12 19:22:39.771647 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:22:39.771654 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:22:39.771660 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 12 19:22:39.771666 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:22:39.771680 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:22:39.771686 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:22:39.771692 kernel: CPU features: detected: Spectre-v4
Feb 12 19:22:39.771698 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:22:39.771706 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:22:39.771712 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:22:39.771718 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:22:39.771724 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 12 19:22:39.771730 kernel: Policy zone: DMA
Feb 12 19:22:39.771737 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:22:39.771744 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:22:39.771750 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:22:39.771756 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:22:39.771762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:22:39.771768 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 12 19:22:39.771775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:22:39.771781 kernel: trace event string verifier disabled
Feb 12 19:22:39.771787 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:22:39.771794 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:22:39.771800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:22:39.771807 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:22:39.771813 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:22:39.771819 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:22:39.771825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:22:39.771831 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:22:39.771837 kernel: GICv3: 256 SPIs implemented
Feb 12 19:22:39.771844 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:22:39.771850 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:22:39.771856 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:22:39.771862 kernel: GICv3: 16 PPIs implemented
Feb 12 19:22:39.771868 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:22:39.771874 kernel: ACPI: SRAT not present
Feb 12 19:22:39.771880 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:22:39.771886 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:22:39.771892 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:22:39.771898 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:22:39.771905 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:22:39.771910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:22:39.771918 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:22:39.771924 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:22:39.771930 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:22:39.771936 kernel: arm-pv: using stolen time PV
Feb 12 19:22:39.771942 kernel: Console: colour dummy device 80x25
Feb 12 19:22:39.771949 kernel: ACPI: Core revision 20210730
Feb 12 19:22:39.771955 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:22:39.771962 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:22:39.771968 kernel: LSM: Security Framework initializing
Feb 12 19:22:39.771974 kernel: SELinux: Initializing.
Feb 12 19:22:39.771981 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:22:39.771987 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:22:39.771994 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:22:39.772000 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:22:39.772006 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:22:39.772012 kernel: Remapping and enabling EFI services.
Feb 12 19:22:39.772018 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:22:39.772024 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:22:39.772031 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:22:39.772039 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:22:39.772045 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:22:39.772051 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:22:39.772058 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:22:39.772064 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:22:39.772071 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:22:39.772077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:22:39.772095 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:22:39.772101 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:22:39.772108 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:22:39.772115 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:22:39.772121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:22:39.772127 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:22:39.772134 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:22:39.772144 kernel: SMP: Total of 4 processors activated.
Feb 12 19:22:39.772151 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:22:39.772158 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:22:39.772165 kernel: CPU features: detected: Common not Private translations
Feb 12 19:22:39.772171 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:22:39.772177 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:22:39.772184 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:22:39.772190 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:22:39.772198 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:22:39.772205 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:22:39.772211 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:22:39.772218 kernel: alternatives: patching kernel code
Feb 12 19:22:39.772224 kernel: devtmpfs: initialized
Feb 12 19:22:39.772232 kernel: KASLR enabled
Feb 12 19:22:39.772238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:22:39.772245 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:22:39.772251 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:22:39.772258 kernel: SMBIOS 3.0.0 present.
Feb 12 19:22:39.772264 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:22:39.772271 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:22:39.772277 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:22:39.772284 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:22:39.772292 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:22:39.772298 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:22:39.772305 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1
Feb 12 19:22:39.772312 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:22:39.772318 kernel: cpuidle: using governor menu
Feb 12 19:22:39.772324 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:22:39.772331 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:22:39.772338 kernel: ACPI: bus type PCI registered
Feb 12 19:22:39.772344 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:22:39.772352 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:22:39.772359 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:22:39.772365 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:22:39.772372 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:22:39.772378 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:22:39.772415 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:22:39.772423 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:22:39.772430 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:22:39.772437 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:22:39.772445 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:22:39.772452 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:22:39.772458 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:22:39.772465 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:22:39.772472 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:22:39.772478 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:22:39.772485 kernel: ACPI: Interpreter enabled
Feb 12 19:22:39.772492 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:22:39.772498 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:22:39.772506 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:22:39.772513 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:22:39.772519 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:22:39.772844 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:22:39.772924 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:22:39.772986 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:22:39.773045 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:22:39.773124 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:22:39.773134 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 12 19:22:39.773141 kernel: PCI host bridge to bus 0000:00
Feb 12 19:22:39.773212 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:22:39.773267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 19:22:39.773322 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:22:39.773377 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:22:39.773458 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:22:39.773535 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:22:39.773599 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 12 19:22:39.773658 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:22:39.773730 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:22:39.773792 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:22:39.773853 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:22:39.776590 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 12 19:22:39.776703 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:22:39.776760 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 19:22:39.776813 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:22:39.776822 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:22:39.776829 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:22:39.776836 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:22:39.776850 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:22:39.776857 kernel: iommu: Default domain type: Translated
Feb 12 19:22:39.776864 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:22:39.776870 kernel: vgaarb: loaded
Feb 12 19:22:39.776877 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:22:39.776884 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:22:39.776891 kernel: PTP clock support registered
Feb 12 19:22:39.776898 kernel: Registered efivars operations
Feb 12 19:22:39.776905 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:22:39.776912 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:22:39.776920 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:22:39.776927 kernel: pnp: PnP ACPI init
Feb 12 19:22:39.777002 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:22:39.777012 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:22:39.777019 kernel: NET: Registered PF_INET protocol family
Feb 12 19:22:39.777026 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:22:39.777033 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:22:39.777041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:22:39.777050 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:22:39.777056 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:22:39.777063 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:22:39.777070 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:22:39.777076 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:22:39.777110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:22:39.777120 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:22:39.777128 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:22:39.777135 kernel: kvm [1]: HYP mode not available
Feb 12 19:22:39.777143 kernel: Initialise system trusted keyrings
Feb 12 19:22:39.777150 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:22:39.777157 kernel: Key type asymmetric registered
Feb 12 19:22:39.777163 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:22:39.777180 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:22:39.777187 kernel: io scheduler mq-deadline registered
Feb 12 19:22:39.777193 kernel: io scheduler kyber registered
Feb 12 19:22:39.777200 kernel: io scheduler bfq registered
Feb 12 19:22:39.777207 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:22:39.777215 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:22:39.777222 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:22:39.777298 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:22:39.777307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:22:39.777314 kernel: thunder_xcv, ver 1.0
Feb 12 19:22:39.777321 kernel: thunder_bgx, ver 1.0
Feb 12 19:22:39.777327 kernel: nicpf, ver 1.0
Feb 12 19:22:39.777334 kernel: nicvf, ver 1.0
Feb 12 19:22:39.777403 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:22:39.777463 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:22:39 UTC (1707765759)
Feb 12 19:22:39.777472 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:22:39.777478 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:22:39.777485 kernel: Segment Routing with IPv6
Feb 12 19:22:39.777492 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:22:39.777499 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:22:39.777506 kernel: Key type dns_resolver registered
Feb 12 19:22:39.777512 kernel: registered taskstats version 1
Feb 12 19:22:39.777522 kernel: Loading compiled-in X.509 certificates
Feb 12 19:22:39.777529 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:22:39.777536 kernel: Key type .fscrypt registered
Feb 12 19:22:39.777542 kernel: Key type fscrypt-provisioning registered
Feb 12 19:22:39.777549 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:22:39.777556 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:22:39.777562 kernel: ima: No architecture policies found
Feb 12 19:22:39.777569 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:22:39.777576 kernel: Run /init as init process
Feb 12 19:22:39.777584 kernel: with arguments:
Feb 12 19:22:39.777591 kernel: /init
Feb 12 19:22:39.777597 kernel: with environment:
Feb 12 19:22:39.777604 kernel: HOME=/
Feb 12 19:22:39.777610 kernel: TERM=linux
Feb 12 19:22:39.777617 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:22:39.777625 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:22:39.777635 systemd[1]: Detected virtualization kvm.
Feb 12 19:22:39.777643 systemd[1]: Detected architecture arm64.
Feb 12 19:22:39.777650 systemd[1]: Running in initrd.
Feb 12 19:22:39.777657 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:22:39.777664 systemd[1]: Hostname set to .
Feb 12 19:22:39.777680 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:22:39.777687 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:22:39.777694 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:22:39.777701 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:22:39.777711 systemd[1]: Reached target paths.target.
Feb 12 19:22:39.777718 systemd[1]: Reached target slices.target.
Feb 12 19:22:39.777725 systemd[1]: Reached target swap.target.
Feb 12 19:22:39.777732 systemd[1]: Reached target timers.target.
Feb 12 19:22:39.777740 systemd[1]: Listening on iscsid.socket.
Feb 12 19:22:39.777747 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:22:39.777755 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:22:39.777763 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:22:39.777770 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:22:39.777778 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:22:39.777785 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:22:39.777792 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:22:39.777799 systemd[1]: Reached target sockets.target.
Feb 12 19:22:39.777806 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:22:39.777814 systemd[1]: Finished network-cleanup.service.
Feb 12 19:22:39.777820 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:22:39.777829 systemd[1]: Starting systemd-journald.service...
Feb 12 19:22:39.777836 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:22:39.777843 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:22:39.777851 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:22:39.777858 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:22:39.777865 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:22:39.777872 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:22:39.777879 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:22:39.777887 kernel: audit: type=1130 audit(1707765759.770:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.777895 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:22:39.777907 systemd-journald[289]: Journal started
Feb 12 19:22:39.777953 systemd-journald[289]: Runtime Journal (/run/log/journal/379ae8eb6a2045a09c75a8408979632c) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:22:39.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.766235 systemd-modules-load[290]: Inserted module 'overlay'
Feb 12 19:22:39.779479 systemd[1]: Started systemd-journald.service.
Feb 12 19:22:39.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.783415 kernel: audit: type=1130 audit(1707765759.780:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.783452 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:22:39.782391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:22:39.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.790123 kernel: audit: type=1130 audit(1707765759.784:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.790169 kernel: Bridge firewalling registered
Feb 12 19:22:39.790445 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 12 19:22:39.793185 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:22:39.794579 systemd-resolved[291]: Positive Trust Anchors:
Feb 12 19:22:39.798949 kernel: audit: type=1130 audit(1707765759.794:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.794594 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:22:39.794622 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:22:39.796046 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:22:39.808562 kernel: SCSI subsystem initialized
Feb 12 19:22:39.808585 kernel: audit: type=1130 audit(1707765759.805:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.801287 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 12 19:22:39.802410 systemd[1]: Started systemd-resolved.service.
Feb 12 19:22:39.806051 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:22:39.811294 dracut-cmdline[307]: dracut-dracut-053
Feb 12 19:22:39.813568 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:22:39.813595 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:22:39.813604 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:22:39.813612 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:22:39.817750 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 12 19:22:39.818611 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:22:39.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.820077 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:22:39.822770 kernel: audit: type=1130 audit(1707765759.819:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.829131 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:22:39.832145 kernel: audit: type=1130 audit(1707765759.829:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.881114 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:22:39.891105 kernel: iscsi: registered transport (tcp)
Feb 12 19:22:39.907105 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:22:39.907124 kernel: QLogic iSCSI HBA Driver
Feb 12 19:22:39.945370 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:22:39.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.946910 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:22:39.949165 kernel: audit: type=1130 audit(1707765759.945:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:39.995116 kernel: raid6: neonx8 gen() 13705 MB/s
Feb 12 19:22:40.012119 kernel: raid6: neonx8 xor() 10773 MB/s
Feb 12 19:22:40.030114 kernel: raid6: neonx4 gen() 13369 MB/s
Feb 12 19:22:40.046103 kernel: raid6: neonx4 xor() 11197 MB/s
Feb 12 19:22:40.063106 kernel: raid6: neonx2 gen() 12697 MB/s
Feb 12 19:22:40.080102 kernel: raid6: neonx2 xor() 10164 MB/s
Feb 12 19:22:40.097098 kernel: raid6: neonx1 gen() 10420 MB/s
Feb 12 19:22:40.114100 kernel: raid6: neonx1 xor() 8685 MB/s
Feb 12 19:22:40.131098 kernel: raid6: int64x8 gen() 6237 MB/s
Feb 12 19:22:40.148099 kernel: raid6: int64x8 xor() 3535 MB/s
Feb 12 19:22:40.165098 kernel: raid6: int64x4 gen() 7190 MB/s
Feb 12 19:22:40.182098 kernel: raid6: int64x4 xor() 3842 MB/s
Feb 12 19:22:40.199100 kernel: raid6: int64x2 gen() 6115 MB/s
Feb 12 19:22:40.216099 kernel: raid6: int64x2 xor() 3312 MB/s
Feb 12 19:22:40.233100 kernel: raid6: int64x1 gen() 5027 MB/s
Feb 12 19:22:40.250378 kernel: raid6: int64x1 xor() 2633 MB/s
Feb 12 19:22:40.250394 kernel: raid6: using algorithm neonx8 gen() 13705 MB/s
Feb 12 19:22:40.250404 kernel: raid6: .... xor() 10773 MB/s, rmw enabled
Feb 12 19:22:40.250412 kernel: raid6: using neon recovery algorithm
Feb 12 19:22:40.261104 kernel: xor: measuring software checksum speed
Feb 12 19:22:40.262099 kernel: 8regs : 17300 MB/sec
Feb 12 19:22:40.263494 kernel: 32regs : 20728 MB/sec
Feb 12 19:22:40.263505 kernel: arm64_neon : 27873 MB/sec
Feb 12 19:22:40.263514 kernel: xor: using function: arm64_neon (27873 MB/sec)
Feb 12 19:22:40.324115 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:22:40.334738 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:22:40.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:40.336454 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:22:40.339244 kernel: audit: type=1130 audit(1707765760.335:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:40.335000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:22:40.335000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:22:40.355273 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 12 19:22:40.358749 systemd[1]: Started systemd-udevd.service.
Feb 12 19:22:40.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:40.360398 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:22:40.373582 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Feb 12 19:22:40.402657 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:22:40.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:40.404197 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:22:40.438818 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:22:40.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:40.475743 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:22:40.478365 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:22:40.478391 kernel: GPT:9289727 != 19775487
Feb 12 19:22:40.478400 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:22:40.479323 kernel: GPT:9289727 != 19775487
Feb 12 19:22:40.479336 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:22:40.479345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:22:40.493113 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (543)
Feb 12 19:22:40.498884 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:22:40.501600 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:22:40.502597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:22:40.506860 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:22:40.510210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:22:40.511877 systemd[1]: Starting disk-uuid.service...
Feb 12 19:22:40.518844 disk-uuid[563]: Primary Header is updated.
Feb 12 19:22:40.518844 disk-uuid[563]: Secondary Entries is updated.
Feb 12 19:22:40.518844 disk-uuid[563]: Secondary Header is updated.
Feb 12 19:22:40.526135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:22:41.533461 disk-uuid[564]: The operation has completed successfully.
Feb 12 19:22:41.534926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:22:41.556737 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:22:41.557917 systemd[1]: Finished disk-uuid.service.
Feb 12 19:22:41.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.560524 systemd[1]: Starting verity-setup.service...
Feb 12 19:22:41.581289 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:22:41.605462 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:22:41.607198 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:22:41.607958 systemd[1]: Finished verity-setup.service.
Feb 12 19:22:41.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.668107 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:22:41.668577 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:22:41.669354 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:22:41.670047 systemd[1]: Starting ignition-setup.service...
Feb 12 19:22:41.671928 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:22:41.679180 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:22:41.679285 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:22:41.679313 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:22:41.687927 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:22:41.694637 systemd[1]: Finished ignition-setup.service.
Feb 12 19:22:41.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.696100 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:22:41.762307 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:22:41.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.763000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:22:41.764696 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:22:41.797753 systemd-networkd[740]: lo: Link UP
Feb 12 19:22:41.797764 systemd-networkd[740]: lo: Gained carrier
Feb 12 19:22:41.798150 systemd-networkd[740]: Enumeration completed
Feb 12 19:22:41.798319 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:22:41.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.799271 systemd-networkd[740]: eth0: Link UP
Feb 12 19:22:41.799274 systemd-networkd[740]: eth0: Gained carrier
Feb 12 19:22:41.799541 systemd[1]: Started systemd-networkd.service.
Feb 12 19:22:41.800681 systemd[1]: Reached target network.target.
Feb 12 19:22:41.805182 ignition[653]: Ignition 2.14.0
Feb 12 19:22:41.803147 systemd[1]: Starting iscsiuio.service...
Feb 12 19:22:41.805189 ignition[653]: Stage: fetch-offline
Feb 12 19:22:41.805235 ignition[653]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:22:41.805243 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:22:41.805397 ignition[653]: parsed url from cmdline: ""
Feb 12 19:22:41.805400 ignition[653]: no config URL provided
Feb 12 19:22:41.805405 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:22:41.805412 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:22:41.805431 ignition[653]: op(1): [started] loading QEMU firmware config module
Feb 12 19:22:41.805436 ignition[653]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 19:22:41.814134 systemd[1]: Started iscsiuio.service.
Feb 12 19:22:41.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.815522 ignition[653]: op(1): [finished] loading QEMU firmware config module
Feb 12 19:22:41.815873 systemd[1]: Starting iscsid.service...
Feb 12 19:22:41.821043 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:22:41.821043 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:22:41.821043 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:22:41.821043 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:22:41.821043 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:22:41.821043 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:22:41.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.825203 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:22:41.830445 systemd[1]: Started iscsid.service.
Feb 12 19:22:41.832579 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:22:41.845875 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:22:41.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.846899 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:22:41.848143 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:22:41.849582 systemd[1]: Reached target remote-fs.target.
Feb 12 19:22:41.851969 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:22:41.860701 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:22:41.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.906623 ignition[653]: parsing config with SHA512: 87110d77f024d27c3636890973c753ab2ffe54e4d9be87713535fd2b48ccbad604f1768fada88b34e75e6af66ce15defafdb457f99d1fdbf898169f53cd2805a
Feb 12 19:22:41.955061 unknown[653]: fetched base config from "system"
Feb 12 19:22:41.955074 unknown[653]: fetched user config from "qemu"
Feb 12 19:22:41.955766 ignition[653]: fetch-offline: fetch-offline passed
Feb 12 19:22:41.955831 ignition[653]: Ignition finished successfully
Feb 12 19:22:41.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.956891 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:22:41.957954 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 19:22:41.959048 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:22:41.969826 ignition[762]: Ignition 2.14.0
Feb 12 19:22:41.969837 ignition[762]: Stage: kargs
Feb 12 19:22:41.969961 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:22:41.969971 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:22:41.971544 ignition[762]: kargs: kargs passed
Feb 12 19:22:41.972840 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:22:41.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.971599 ignition[762]: Ignition finished successfully
Feb 12 19:22:41.974611 systemd[1]: Starting ignition-disks.service...
Feb 12 19:22:41.981888 ignition[768]: Ignition 2.14.0
Feb 12 19:22:41.981897 ignition[768]: Stage: disks
Feb 12 19:22:41.982005 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:22:41.982016 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:22:41.983552 ignition[768]: disks: disks passed
Feb 12 19:22:41.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:41.985353 systemd[1]: Finished ignition-disks.service.
Feb 12 19:22:41.983604 ignition[768]: Ignition finished successfully
Feb 12 19:22:41.986774 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:22:41.988804 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:22:41.989879 systemd[1]: Reached target local-fs.target.
Feb 12 19:22:41.990973 systemd[1]: Reached target sysinit.target.
Feb 12 19:22:41.992924 systemd[1]: Reached target basic.target.
Feb 12 19:22:41.995622 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:22:42.031251 systemd-fsck[776]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 19:22:42.103368 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:22:42.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:42.105219 systemd[1]: Mounting sysroot.mount...
Feb 12 19:22:42.112127 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:22:42.112503 systemd[1]: Mounted sysroot.mount.
Feb 12 19:22:42.113131 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:22:42.115398 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:22:42.116176 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 19:22:42.116218 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:22:42.116240 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:22:42.118722 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:22:42.120938 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:22:42.125544 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:22:42.131803 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:22:42.136752 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:22:42.141801 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:22:42.182747 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:22:42.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:42.184456 systemd[1]: Starting ignition-mount.service...
Feb 12 19:22:42.185800 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:22:42.190989 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:22:42.201030 ignition[829]: INFO : Ignition 2.14.0
Feb 12 19:22:42.201030 ignition[829]: INFO : Stage: mount
Feb 12 19:22:42.202380 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:22:42.202380 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:22:42.204889 ignition[829]: INFO : mount: mount passed
Feb 12 19:22:42.204889 ignition[829]: INFO : Ignition finished successfully
Feb 12 19:22:42.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:42.206825 systemd[1]: Finished ignition-mount.service.
Feb 12 19:22:42.214191 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:22:42.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:42.617402 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:22:42.636384 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838)
Feb 12 19:22:42.636432 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:22:42.636450 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:22:42.637444 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:22:42.641551 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:22:42.643544 systemd[1]: Starting ignition-files.service...
Feb 12 19:22:42.660253 ignition[858]: INFO : Ignition 2.14.0
Feb 12 19:22:42.660253 ignition[858]: INFO : Stage: files
Feb 12 19:22:42.661630 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:22:42.661630 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:22:42.663531 ignition[858]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:22:42.666862 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:22:42.666862 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:22:42.678344 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:22:42.679646 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:22:42.681231 unknown[858]: wrote ssh authorized keys file for user: core
Feb 12 19:22:42.682498 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:22:42.684000 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:22:42.684000 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 12 19:22:42.723928 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:22:42.781834 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 19:22:42.781834 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:22:42.785516 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 19:22:43.015197 systemd-networkd[740]: eth0: Gained IPv6LL
Feb 12 19:22:43.118968 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:22:43.318473 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 12 19:22:43.318473 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:22:43.322835 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:22:43.322835 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 12 19:22:43.555747 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:22:43.677783 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 12 19:22:43.677783 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:22:43.682110 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:22:43.682110 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:22:43.682110 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:22:43.682110 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 12 19:22:43.732124 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:22:43.999958 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 19:22:44.002263 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:22:44.003589 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:22:44.003589 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 12 19:22:44.028648 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:22:44.306574 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 12 19:22:44.308878 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:22:44.308878 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:22:44.308878 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:22:44.330210 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:22:45.056264 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: op(10): [started] processing unit "containerd.service"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:22:45.058788 ignition[858]: INFO : files: op(10): [finished] processing unit "containerd.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(14): [started] processing unit "prepare-critools.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(18): [started] processing unit "coreos-metadata.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(18): op(19): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(18): op(19): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(18): [finished] processing unit "coreos-metadata.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:22:45.087908 ignition[858]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:22:45.116134 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 19:22:45.116158 kernel: audit: type=1130 audit(1707765765.106:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:22:45.116231 ignition[858]: INFO : files: files passed
Feb 12 19:22:45.116231 ignition[858]: INFO : Ignition finished successfully
Feb 12 19:22:45.135806 kernel: audit: type=1130 audit(1707765765.117:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.135829 kernel: audit: type=1131 audit(1707765765.117:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.135839 kernel: audit: type=1130 audit(1707765765.123:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:22:45.105966 systemd[1]: Finished ignition-files.service.
Feb 12 19:22:45.107785 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:22:45.137466 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 19:22:45.111295 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:22:45.146498 kernel: audit: type=1130 audit(1707765765.141:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.146567 kernel: audit: type=1131 audit(1707765765.141:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.146862 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:22:45.111987 systemd[1]: Starting ignition-quench.service... Feb 12 19:22:45.116377 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:22:45.116461 systemd[1]: Finished ignition-quench.service. Feb 12 19:22:45.118248 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:22:45.123389 systemd[1]: Reached target ignition-complete.target. Feb 12 19:22:45.127517 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:22:45.140644 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:22:45.140752 systemd[1]: Finished initrd-parse-etc.service. 
Feb 12 19:22:45.141815 systemd[1]: Reached target initrd-fs.target. Feb 12 19:22:45.146971 systemd[1]: Reached target initrd.target. Feb 12 19:22:45.148627 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:22:45.149434 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:22:45.160444 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:22:45.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.162119 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:22:45.164589 kernel: audit: type=1130 audit(1707765765.161:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.171035 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:22:45.171892 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:22:45.173045 systemd[1]: Stopped target timers.target. Feb 12 19:22:45.174146 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:22:45.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.174271 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:22:45.178580 kernel: audit: type=1131 audit(1707765765.175:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.175343 systemd[1]: Stopped target initrd.target. Feb 12 19:22:45.178193 systemd[1]: Stopped target basic.target. Feb 12 19:22:45.179164 systemd[1]: Stopped target ignition-complete.target. 
Feb 12 19:22:45.180597 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:22:45.181596 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:22:45.182844 systemd[1]: Stopped target remote-fs.target. Feb 12 19:22:45.183964 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:22:45.185197 systemd[1]: Stopped target sysinit.target. Feb 12 19:22:45.186271 systemd[1]: Stopped target local-fs.target. Feb 12 19:22:45.187355 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:22:45.188341 systemd[1]: Stopped target swap.target. Feb 12 19:22:45.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.189376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:22:45.193798 kernel: audit: type=1131 audit(1707765765.190:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.189493 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:22:45.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.190524 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:22:45.198061 kernel: audit: type=1131 audit(1707765765.194:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:45.193251 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:22:45.193354 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:22:45.194662 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:22:45.194769 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:22:45.197741 systemd[1]: Stopped target paths.target. Feb 12 19:22:45.198827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:22:45.201592 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:22:45.202529 systemd[1]: Stopped target slices.target. Feb 12 19:22:45.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.203675 systemd[1]: Stopped target sockets.target. Feb 12 19:22:45.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.205006 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:22:45.205146 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:22:45.211157 iscsid[747]: iscsid shutting down. Feb 12 19:22:45.206407 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:22:45.206499 systemd[1]: Stopped ignition-files.service. Feb 12 19:22:45.208394 systemd[1]: Stopping ignition-mount.service... Feb 12 19:22:45.209383 systemd[1]: Stopping iscsid.service... Feb 12 19:22:45.215559 systemd[1]: Stopping sysroot-boot.service... 
Feb 12 19:22:45.216751 ignition[900]: INFO : Ignition 2.14.0 Feb 12 19:22:45.216751 ignition[900]: INFO : Stage: umount Feb 12 19:22:45.216751 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:22:45.216751 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:22:45.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.216144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:22:45.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.223423 ignition[900]: INFO : umount: umount passed Feb 12 19:22:45.223423 ignition[900]: INFO : Ignition finished successfully Feb 12 19:22:45.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.216272 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:22:45.217473 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:22:45.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:45.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.217569 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:22:45.220065 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:22:45.220171 systemd[1]: Stopped iscsid.service. Feb 12 19:22:45.221196 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:22:45.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.221271 systemd[1]: Stopped ignition-mount.service. Feb 12 19:22:45.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.222888 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:22:45.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.222958 systemd[1]: Closed iscsid.socket. Feb 12 19:22:45.223933 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:22:45.223974 systemd[1]: Stopped ignition-disks.service. Feb 12 19:22:45.225364 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:22:45.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:22:45.225402 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:22:45.226714 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:22:45.226752 systemd[1]: Stopped ignition-setup.service. Feb 12 19:22:45.228129 systemd[1]: Stopping iscsiuio.service... Feb 12 19:22:45.231531 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:22:45.231999 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:22:45.232090 systemd[1]: Stopped iscsiuio.service. Feb 12 19:22:45.233275 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:22:45.233350 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:22:45.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.234614 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:22:45.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.234699 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:22:45.236269 systemd[1]: Stopped target network.target. Feb 12 19:22:45.237353 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:22:45.252000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:22:45.237384 systemd[1]: Closed iscsiuio.socket. Feb 12 19:22:45.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.238585 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 12 19:22:45.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.238626 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:22:45.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.239877 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:22:45.241039 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:22:45.246916 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:22:45.247014 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:22:45.247133 systemd-networkd[740]: eth0: DHCPv6 lease lost Feb 12 19:22:45.262000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:22:45.248470 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:22:45.248559 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:22:45.249516 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:22:45.249545 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:22:45.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.251326 systemd[1]: Stopping network-cleanup.service... Feb 12 19:22:45.252465 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:22:45.252521 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:22:45.253903 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:22:45.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:45.253943 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:22:45.255863 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:22:45.255905 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:22:45.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.256823 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:22:45.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.262296 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:22:45.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.265114 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:22:45.265207 systemd[1]: Stopped network-cleanup.service. Feb 12 19:22:45.268849 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:22:45.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.268987 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:22:45.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.270329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 12 19:22:45.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.270371 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:22:45.271469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:22:45.271502 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:22:45.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:45.272644 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:22:45.272700 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:22:45.274125 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:22:45.274167 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:22:45.275251 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:22:45.275290 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:22:45.277199 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:22:45.278606 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:22:45.278679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:22:45.280484 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:22:45.280519 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:22:45.281380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:22:45.281418 systemd[1]: Stopped systemd-vconsole-setup.service. 
Feb 12 19:22:45.283507 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:22:45.284007 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:22:45.284108 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:22:45.285386 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:22:45.287267 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:22:45.296398 systemd[1]: Switching root. Feb 12 19:22:45.301000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:22:45.301000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:22:45.301000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:22:45.301000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:22:45.301000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:22:45.308761 systemd-journald[289]: Journal stopped Feb 12 19:22:47.664166 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 12 19:22:47.664257 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:22:47.664283 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:22:47.664298 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:22:47.664308 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:22:47.664317 kernel: SELinux: policy capability open_perms=1 Feb 12 19:22:47.664328 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:22:47.664341 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:22:47.664351 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:22:47.664361 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:22:47.664370 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:22:47.664382 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:22:47.664394 systemd[1]: Successfully loaded SELinux policy in 33.674ms. Feb 12 19:22:47.664415 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.166ms. 
Feb 12 19:22:47.664430 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:22:47.664441 systemd[1]: Detected virtualization kvm. Feb 12 19:22:47.664453 systemd[1]: Detected architecture arm64. Feb 12 19:22:47.664464 systemd[1]: Detected first boot. Feb 12 19:22:47.664491 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:22:47.664505 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:22:47.664516 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:22:47.664530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:22:47.664541 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:22:47.664554 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:22:47.664565 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:22:47.664578 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:22:47.664598 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:22:47.664619 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:22:47.664631 systemd[1]: Created slice system-getty.slice. Feb 12 19:22:47.664647 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:22:47.664659 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 12 19:22:47.664670 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:22:47.664681 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:22:47.664692 systemd[1]: Created slice user.slice. Feb 12 19:22:47.664704 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:22:47.664715 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:22:47.664726 systemd[1]: Set up automount boot.automount. Feb 12 19:22:47.664737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:22:47.664748 systemd[1]: Reached target integritysetup.target. Feb 12 19:22:47.664759 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:22:47.664770 systemd[1]: Reached target remote-fs.target. Feb 12 19:22:47.664781 systemd[1]: Reached target slices.target. Feb 12 19:22:47.664796 systemd[1]: Reached target swap.target. Feb 12 19:22:47.664807 systemd[1]: Reached target torcx.target. Feb 12 19:22:47.664817 systemd[1]: Reached target veritysetup.target. Feb 12 19:22:47.664828 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:22:47.664839 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:22:47.664850 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:22:47.664860 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:22:47.664871 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:22:47.664882 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:22:47.664893 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:22:47.664909 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:22:47.664920 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:22:47.664934 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:22:47.664945 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:22:47.664956 systemd[1]: Mounting media.mount... Feb 12 19:22:47.664966 systemd[1]: Mounting sys-kernel-debug.mount... 
Feb 12 19:22:47.664977 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:22:47.664988 systemd[1]: Mounting tmp.mount... Feb 12 19:22:47.664999 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:22:47.665011 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:22:47.665022 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:22:47.665034 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:22:47.665045 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:22:47.665056 systemd[1]: Starting modprobe@drm.service... Feb 12 19:22:47.665067 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:22:47.665077 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:22:47.665108 systemd[1]: Starting modprobe@loop.service... Feb 12 19:22:47.665122 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:22:47.665146 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:22:47.665158 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:22:47.665169 systemd[1]: Starting systemd-journald.service... Feb 12 19:22:47.665185 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:22:47.665197 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:22:47.665208 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:22:47.665223 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:22:47.665233 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:22:47.665244 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:22:47.665257 systemd[1]: Mounted media.mount. Feb 12 19:22:47.665269 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:22:47.665285 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:22:47.665296 systemd[1]: Mounted tmp.mount. 
Feb 12 19:22:47.665307 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:22:47.665324 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:22:47.665335 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:22:47.665345 kernel: fuse: init (API version 7.34) Feb 12 19:22:47.665355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:22:47.665366 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:22:47.665376 kernel: loop: module loaded Feb 12 19:22:47.665389 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:22:47.665402 systemd[1]: Finished modprobe@drm.service. Feb 12 19:22:47.665413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:22:47.665424 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:22:47.665435 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:22:47.665445 systemd[1]: Finished modprobe@loop.service. Feb 12 19:22:47.665457 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:22:47.665468 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:22:47.665479 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:22:47.665490 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:22:47.665501 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:22:47.665512 systemd[1]: Reached target network-pre.target. Feb 12 19:22:47.665528 systemd-journald[1027]: Journal started Feb 12 19:22:47.665579 systemd-journald[1027]: Runtime Journal (/run/log/journal/379ae8eb6a2045a09c75a8408979632c) is 6.0M, max 48.7M, 42.6M free. 
Feb 12 19:22:47.517000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:22:47.517000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:22:47.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:47.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.648000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:22:47.648000 audit[1027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe77ac940 a2=4000 a3=1 items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:47.648000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:22:47.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:22:47.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.671154 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:22:47.671228 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:22:47.676844 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:22:47.676928 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:22:47.677121 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:22:47.681263 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:22:47.685122 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:22:47.685181 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:22:47.690338 systemd[1]: Started systemd-journald.service. 
Feb 12 19:22:47.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.691769 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:22:47.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.693030 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:22:47.694032 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:22:47.697075 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:22:47.699972 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:22:47.702038 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:22:47.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.703939 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:22:47.708949 systemd-journald[1027]: Time spent on flushing to /var/log/journal/379ae8eb6a2045a09c75a8408979632c is 12.228ms for 973 entries. Feb 12 19:22:47.708949 systemd-journald[1027]: System Journal (/var/log/journal/379ae8eb6a2045a09c75a8408979632c) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:22:47.735530 systemd-journald[1027]: Received client request to flush runtime journal. Feb 12 19:22:47.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:47.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.736560 udevadm[1075]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:22:47.724037 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:22:47.727810 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:22:47.730367 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:22:47.736687 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:22:47.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:47.757798 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:22:47.760877 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:22:47.796010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:22:47.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.107532 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:22:48.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:48.109630 systemd[1]: Starting systemd-udevd.service... Feb 12 19:22:48.131508 systemd-udevd[1097]: Using default interface naming scheme 'v252'. Feb 12 19:22:48.148525 systemd[1]: Started systemd-udevd.service. Feb 12 19:22:48.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.150747 systemd[1]: Starting systemd-networkd.service... Feb 12 19:22:48.162138 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:22:48.165659 systemd[1]: Found device dev-ttyAMA0.device. Feb 12 19:22:48.205675 systemd[1]: Started systemd-userdbd.service. Feb 12 19:22:48.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.219337 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:22:48.261538 systemd-networkd[1105]: lo: Link UP Feb 12 19:22:48.261868 systemd-networkd[1105]: lo: Gained carrier Feb 12 19:22:48.262410 systemd-networkd[1105]: Enumeration completed Feb 12 19:22:48.262528 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:22:48.262695 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:22:48.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:48.263472 systemd[1]: Started systemd-networkd.service. Feb 12 19:22:48.265550 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:22:48.266010 systemd-networkd[1105]: eth0: Link UP Feb 12 19:22:48.266114 systemd-networkd[1105]: eth0: Gained carrier Feb 12 19:22:48.280759 lvm[1131]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:22:48.282251 systemd-networkd[1105]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:22:48.311072 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:22:48.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.311894 systemd[1]: Reached target cryptsetup.target. Feb 12 19:22:48.313876 systemd[1]: Starting lvm2-activation.service... Feb 12 19:22:48.317664 lvm[1133]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:22:48.344188 systemd[1]: Finished lvm2-activation.service. Feb 12 19:22:48.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.344945 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:22:48.345602 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:22:48.345630 systemd[1]: Reached target local-fs.target. Feb 12 19:22:48.346191 systemd[1]: Reached target machines.target. Feb 12 19:22:48.348053 systemd[1]: Starting ldconfig.service... Feb 12 19:22:48.349168 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Feb 12 19:22:48.349229 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:22:48.350493 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:22:48.352364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:22:48.354920 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:22:48.356081 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:22:48.356151 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:22:48.357445 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:22:48.361466 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1136 (bootctl) Feb 12 19:22:48.362757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:22:48.370346 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:22:48.371236 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:22:48.372337 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:22:48.373672 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:22:48.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.546913 systemd-fsck[1145]: fsck.fat 4.2 (2021-01-31) Feb 12 19:22:48.546913 systemd-fsck[1145]: /dev/vda1: 236 files, 113719/258078 clusters Feb 12 19:22:48.552435 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 12 19:22:48.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.556962 systemd[1]: Mounting boot.mount... Feb 12 19:22:48.567270 systemd[1]: Mounted boot.mount. Feb 12 19:22:48.568390 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:22:48.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.580017 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:22:48.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.620515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:22:48.656615 ldconfig[1135]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:22:48.660938 systemd[1]: Finished ldconfig.service. Feb 12 19:22:48.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.663137 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:22:48.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.665174 systemd[1]: Starting audit-rules.service... 
Feb 12 19:22:48.667170 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:22:48.669185 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:22:48.671805 systemd[1]: Starting systemd-resolved.service... Feb 12 19:22:48.674115 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:22:48.678363 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:22:48.679909 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:22:48.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.680000 audit[1166]: SYSTEM_BOOT pid=1166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.681024 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:22:48.686605 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:22:48.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.701711 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:22:48.703995 systemd[1]: Starting systemd-update-done.service... Feb 12 19:22:48.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.713050 systemd[1]: Finished systemd-update-done.service. 
Feb 12 19:22:48.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:48.720000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:22:48.720000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9b52540 a2=420 a3=0 items=0 ppid=1154 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:48.720000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:22:48.721191 augenrules[1179]: No rules Feb 12 19:22:48.721902 systemd[1]: Finished audit-rules.service. Feb 12 19:22:48.737698 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:22:48.738812 systemd-timesyncd[1163]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:22:48.738866 systemd-timesyncd[1163]: Initial clock synchronization to Mon 2024-02-12 19:22:48.954463 UTC. Feb 12 19:22:48.738954 systemd[1]: Reached target time-set.target. Feb 12 19:22:48.742522 systemd-resolved[1159]: Positive Trust Anchors: Feb 12 19:22:48.742832 systemd-resolved[1159]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:22:48.742916 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:22:48.752756 systemd-resolved[1159]: Defaulting to hostname 'linux'. Feb 12 19:22:48.754300 systemd[1]: Started systemd-resolved.service. Feb 12 19:22:48.754975 systemd[1]: Reached target network.target. Feb 12 19:22:48.755575 systemd[1]: Reached target nss-lookup.target. Feb 12 19:22:48.756150 systemd[1]: Reached target sysinit.target. Feb 12 19:22:48.756762 systemd[1]: Started motdgen.path. Feb 12 19:22:48.757286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:22:48.758247 systemd[1]: Started logrotate.timer. Feb 12 19:22:48.759068 systemd[1]: Started mdadm.timer. Feb 12 19:22:48.759718 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:22:48.760480 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:22:48.760516 systemd[1]: Reached target paths.target. Feb 12 19:22:48.761208 systemd[1]: Reached target timers.target. Feb 12 19:22:48.762221 systemd[1]: Listening on dbus.socket. Feb 12 19:22:48.764118 systemd[1]: Starting docker.socket... Feb 12 19:22:48.765748 systemd[1]: Listening on sshd.socket. Feb 12 19:22:48.766530 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 19:22:48.766921 systemd[1]: Listening on docker.socket. Feb 12 19:22:48.767692 systemd[1]: Reached target sockets.target. Feb 12 19:22:48.768383 systemd[1]: Reached target basic.target. Feb 12 19:22:48.769234 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:22:48.769286 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:22:48.769305 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:22:48.770468 systemd[1]: Starting containerd.service... Feb 12 19:22:48.772359 systemd[1]: Starting dbus.service... Feb 12 19:22:48.774529 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:22:48.776605 systemd[1]: Starting extend-filesystems.service... Feb 12 19:22:48.778940 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:22:48.780399 systemd[1]: Starting motdgen.service... Feb 12 19:22:48.783116 jq[1191]: false Feb 12 19:22:48.783177 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:22:48.785024 systemd[1]: Starting prepare-critools.service... Feb 12 19:22:48.787003 systemd[1]: Starting prepare-helm.service... Feb 12 19:22:48.788827 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:22:48.791128 systemd[1]: Starting sshd-keygen.service... Feb 12 19:22:48.793615 systemd[1]: Starting systemd-logind.service... Feb 12 19:22:48.796391 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:22:48.796470 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:22:48.797791 systemd[1]: Starting update-engine.service... Feb 12 19:22:48.799729 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Feb 12 19:22:48.802211 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:22:48.817190 jq[1214]: true Feb 12 19:22:48.802545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:22:48.807710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:22:48.807958 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:22:48.813190 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:22:48.813424 systemd[1]: Finished motdgen.service. Feb 12 19:22:48.818797 tar[1222]: linux-arm64/helm Feb 12 19:22:48.819360 tar[1219]: crictl Feb 12 19:22:48.825422 jq[1223]: true Feb 12 19:22:48.825555 tar[1218]: ./ Feb 12 19:22:48.825555 tar[1218]: ./macvlan Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda1 Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda2 Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda3 Feb 12 19:22:48.831477 extend-filesystems[1192]: Found usr Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda4 Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda6 Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda7 Feb 12 19:22:48.831477 extend-filesystems[1192]: Found vda9 Feb 12 19:22:48.831477 extend-filesystems[1192]: Checking size of /dev/vda9 Feb 12 19:22:48.871140 extend-filesystems[1192]: Resized partition /dev/vda9 Feb 12 19:22:48.874684 dbus-daemon[1190]: [system] SELinux support is enabled Feb 12 19:22:48.875223 systemd[1]: Started dbus.service. Feb 12 19:22:48.877826 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:22:48.877850 systemd[1]: Reached target system-config.target. 
Feb 12 19:22:48.878644 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:22:48.878671 systemd[1]: Reached target user-config.target. Feb 12 19:22:48.880857 extend-filesystems[1255]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:22:48.908216 tar[1218]: ./static Feb 12 19:22:48.913177 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:22:48.918900 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 19:22:48.919096 systemd-logind[1207]: New seat seat0. Feb 12 19:22:48.920382 systemd[1]: Started systemd-logind.service. Feb 12 19:22:48.930746 update_engine[1213]: I0212 19:22:48.930410 1213 main.cc:92] Flatcar Update Engine starting Feb 12 19:22:48.957366 update_engine[1213]: I0212 19:22:48.935232 1213 update_check_scheduler.cc:74] Next update check in 8m13s Feb 12 19:22:48.933199 systemd[1]: Started update-engine.service. Feb 12 19:22:48.935811 systemd[1]: Started locksmithd.service. Feb 12 19:22:48.958818 env[1229]: time="2024-02-12T19:22:48.958763080Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:22:48.961683 bash[1247]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:22:48.962495 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:22:48.971111 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:22:48.980051 env[1229]: time="2024-02-12T19:22:48.979998640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:22:48.991919 extend-filesystems[1255]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:22:48.991919 extend-filesystems[1255]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:22:48.991919 extend-filesystems[1255]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Feb 12 19:22:48.997436 extend-filesystems[1192]: Resized filesystem in /dev/vda9 Feb 12 19:22:48.994670 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.991924040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.993771280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.993865680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994228920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994249200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994264440Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994275160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994359440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994654040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:22:48.998361 env[1229]: time="2024-02-12T19:22:48.994805440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:22:48.994911 systemd[1]: Finished extend-filesystems.service. Feb 12 19:22:48.998628 env[1229]: time="2024-02-12T19:22:48.994820840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:22:48.998628 env[1229]: time="2024-02-12T19:22:48.994879000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:22:48.998628 env[1229]: time="2024-02-12T19:22:48.994892520Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:22:49.007937 tar[1218]: ./vlan Feb 12 19:22:49.031601 env[1229]: time="2024-02-12T19:22:49.031542638Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:22:49.031601 env[1229]: time="2024-02-12T19:22:49.031597724Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:22:49.031769 env[1229]: time="2024-02-12T19:22:49.031613704Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:22:49.031769 env[1229]: time="2024-02-12T19:22:49.031648620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.031769 env[1229]: time="2024-02-12T19:22:49.031665791Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 12 19:22:49.031769 env[1229]: time="2024-02-12T19:22:49.031687111Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.031769 env[1229]: time="2024-02-12T19:22:49.031701858Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.032101 env[1229]: time="2024-02-12T19:22:49.032069512Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.032101 env[1229]: time="2024-02-12T19:22:49.032092639Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.032176 env[1229]: time="2024-02-12T19:22:49.032108455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.032176 env[1229]: time="2024-02-12T19:22:49.032136470Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.032176 env[1229]: time="2024-02-12T19:22:49.032150560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:22:49.032331 env[1229]: time="2024-02-12T19:22:49.032303784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:22:49.032412 env[1229]: time="2024-02-12T19:22:49.032390377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.032795248Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.032849349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.032865205Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033010131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033128642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033145403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033157973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033172145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033185783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033205624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033221275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033237706Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033410031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033429379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034173 env[1229]: time="2024-02-12T19:22:49.033453944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:22:49.034573 env[1229]: time="2024-02-12T19:22:49.033468650Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:22:49.034573 env[1229]: time="2024-02-12T19:22:49.033487382Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:22:49.034573 env[1229]: time="2024-02-12T19:22:49.033501143Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:22:49.034573 env[1229]: time="2024-02-12T19:22:49.033540620Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:22:49.034573 env[1229]: time="2024-02-12T19:22:49.033582027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:22:49.034759 env[1229]: time="2024-02-12T19:22:49.033812561Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:22:49.034759 env[1229]: time="2024-02-12T19:22:49.033878204Z" level=info msg="Connect containerd service" Feb 12 19:22:49.034759 env[1229]: time="2024-02-12T19:22:49.033922117Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:22:49.041809 env[1229]: time="2024-02-12T19:22:49.041762262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.042188576Z" level=info msg="Start subscribing containerd event" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.042249167Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.042263133Z" level=info msg="Start recovering state" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.042293696Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.042349645Z" level=info msg="containerd successfully booted in 0.096392s" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.043356441Z" level=info msg="Start event monitor" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.043385648Z" level=info msg="Start snapshots syncer" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.043397150Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:22:49.043749 env[1229]: time="2024-02-12T19:22:49.043421715Z" level=info msg="Start streaming server" Feb 12 19:22:49.042465 systemd[1]: Started containerd.service. 
Feb 12 19:22:49.063873 tar[1218]: ./portmap Feb 12 19:22:49.116160 tar[1218]: ./host-local Feb 12 19:22:49.151338 tar[1218]: ./vrf Feb 12 19:22:49.201927 locksmithd[1258]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:22:49.206537 tar[1218]: ./bridge Feb 12 19:22:49.236383 tar[1218]: ./tuning Feb 12 19:22:49.260653 tar[1218]: ./firewall Feb 12 19:22:49.291126 tar[1218]: ./host-device Feb 12 19:22:49.318199 tar[1218]: ./sbr Feb 12 19:22:49.342829 tar[1218]: ./loopback Feb 12 19:22:49.366816 tar[1218]: ./dhcp Feb 12 19:22:49.384886 systemd[1]: Finished prepare-critools.service. Feb 12 19:22:49.432124 tar[1222]: linux-arm64/LICENSE Feb 12 19:22:49.432252 tar[1222]: linux-arm64/README.md Feb 12 19:22:49.436718 tar[1218]: ./ptp Feb 12 19:22:49.437587 systemd[1]: Finished prepare-helm.service. Feb 12 19:22:49.463132 tar[1218]: ./ipvlan Feb 12 19:22:49.491281 tar[1218]: ./bandwidth Feb 12 19:22:49.532336 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:22:50.251324 systemd-networkd[1105]: eth0: Gained IPv6LL Feb 12 19:22:50.973353 sshd_keygen[1216]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:22:50.990992 systemd[1]: Finished sshd-keygen.service. Feb 12 19:22:50.993298 systemd[1]: Starting issuegen.service... Feb 12 19:22:50.997968 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:22:50.998395 systemd[1]: Finished issuegen.service. Feb 12 19:22:51.000563 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:22:51.006606 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:22:51.008834 systemd[1]: Started getty@tty1.service. Feb 12 19:22:51.010610 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:22:51.011534 systemd[1]: Reached target getty.target. Feb 12 19:22:51.012174 systemd[1]: Reached target multi-user.target. Feb 12 19:22:51.014139 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Feb 12 19:22:51.020432 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:22:51.020647 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:22:51.021625 systemd[1]: Startup finished in 6.429s (kernel) + 5.640s (userspace) = 12.070s. Feb 12 19:22:52.215533 systemd[1]: Created slice system-sshd.slice. Feb 12 19:22:52.216711 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:58474.service. Feb 12 19:22:52.271200 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 58474 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:52.273202 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.281360 systemd[1]: Created slice user-500.slice. Feb 12 19:22:52.282610 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:22:52.284510 systemd-logind[1207]: New session 1 of user core. Feb 12 19:22:52.291267 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:22:52.292808 systemd[1]: Starting user@500.service... Feb 12 19:22:52.295633 (systemd)[1308]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.360178 systemd[1308]: Queued start job for default target default.target. Feb 12 19:22:52.360423 systemd[1308]: Reached target paths.target. Feb 12 19:22:52.360438 systemd[1308]: Reached target sockets.target. Feb 12 19:22:52.360449 systemd[1308]: Reached target timers.target. Feb 12 19:22:52.360471 systemd[1308]: Reached target basic.target. Feb 12 19:22:52.360516 systemd[1308]: Reached target default.target. Feb 12 19:22:52.360539 systemd[1308]: Startup finished in 59ms. Feb 12 19:22:52.360795 systemd[1]: Started user@500.service. Feb 12 19:22:52.361831 systemd[1]: Started session-1.scope. Feb 12 19:22:52.412242 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:58490.service. 
Feb 12 19:22:52.461271 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:52.462564 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.466852 systemd[1]: Started session-2.scope. Feb 12 19:22:52.467220 systemd-logind[1207]: New session 2 of user core. Feb 12 19:22:52.525224 sshd[1317]: pam_unix(sshd:session): session closed for user core Feb 12 19:22:52.527465 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:53406.service. Feb 12 19:22:52.527959 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:58490.service: Deactivated successfully. Feb 12 19:22:52.528950 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:22:52.528993 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:22:52.529705 systemd-logind[1207]: Removed session 2. Feb 12 19:22:52.564826 sshd[1322]: Accepted publickey for core from 10.0.0.1 port 53406 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:52.566393 sshd[1322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.569663 systemd-logind[1207]: New session 3 of user core. Feb 12 19:22:52.570490 systemd[1]: Started session-3.scope. Feb 12 19:22:52.621060 sshd[1322]: pam_unix(sshd:session): session closed for user core Feb 12 19:22:52.623461 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:53420.service. Feb 12 19:22:52.624061 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:53406.service: Deactivated successfully. Feb 12 19:22:52.625181 systemd-logind[1207]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:22:52.625218 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:22:52.625973 systemd-logind[1207]: Removed session 3. 
Feb 12 19:22:52.662369 sshd[1329]: Accepted publickey for core from 10.0.0.1 port 53420 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:52.663621 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.667253 systemd-logind[1207]: New session 4 of user core. Feb 12 19:22:52.668086 systemd[1]: Started session-4.scope. Feb 12 19:22:52.723335 sshd[1329]: pam_unix(sshd:session): session closed for user core Feb 12 19:22:52.725494 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:53426.service. Feb 12 19:22:52.726766 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:53420.service: Deactivated successfully. Feb 12 19:22:52.727770 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:22:52.727948 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:22:52.728656 systemd-logind[1207]: Removed session 4. Feb 12 19:22:52.769875 sshd[1336]: Accepted publickey for core from 10.0.0.1 port 53426 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:52.771151 sshd[1336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.774502 systemd-logind[1207]: New session 5 of user core. Feb 12 19:22:52.775498 systemd[1]: Started session-5.scope. Feb 12 19:22:52.837846 sudo[1342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 12 19:22:52.838070 sudo[1342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:22:52.852338 dbus-daemon[1190]: avc: received setenforce notice (enforcing=1) Feb 12 19:22:52.854196 sudo[1342]: pam_unix(sudo:session): session closed for user root Feb 12 19:22:52.856536 sshd[1336]: pam_unix(sshd:session): session closed for user core Feb 12 19:22:52.859518 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:53432.service. Feb 12 19:22:52.860225 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:53426.service: Deactivated successfully. 
Feb 12 19:22:52.861225 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:22:52.861926 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:22:52.862461 systemd-logind[1207]: Removed session 5. Feb 12 19:22:52.897952 sshd[1344]: Accepted publickey for core from 10.0.0.1 port 53432 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:52.899991 sshd[1344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:52.903616 systemd-logind[1207]: New session 6 of user core. Feb 12 19:22:52.905148 systemd[1]: Started session-6.scope. Feb 12 19:22:52.958826 sudo[1351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 12 19:22:52.959044 sudo[1351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:22:52.961706 sudo[1351]: pam_unix(sudo:session): session closed for user root Feb 12 19:22:52.966491 sudo[1350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 12 19:22:52.966975 sudo[1350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:22:52.976136 systemd[1]: Stopping audit-rules.service... Feb 12 19:22:52.976000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 19:22:52.977994 auditctl[1354]: No rules Feb 12 19:22:52.978319 systemd[1]: audit-rules.service: Deactivated successfully. Feb 12 19:22:52.978409 kernel: kauditd_printk_skb: 98 callbacks suppressed Feb 12 19:22:52.978443 kernel: audit: type=1305 audit(1707765772.976:131): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 19:22:52.978570 systemd[1]: Stopped audit-rules.service. 
Feb 12 19:22:52.976000 audit[1354]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe8908730 a2=420 a3=0 items=0 ppid=1 pid=1354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:52.980240 systemd[1]: Starting audit-rules.service... Feb 12 19:22:52.982501 kernel: audit: type=1300 audit(1707765772.976:131): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe8908730 a2=420 a3=0 items=0 ppid=1 pid=1354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:52.982563 kernel: audit: type=1327 audit(1707765772.976:131): proctitle=2F7362696E2F617564697463746C002D44 Feb 12 19:22:52.976000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 12 19:22:52.983338 kernel: audit: type=1131 audit(1707765772.977:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:52.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:52.996750 augenrules[1372]: No rules Feb 12 19:22:52.997777 systemd[1]: Finished audit-rules.service. Feb 12 19:22:52.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:52.999029 sudo[1350]: pam_unix(sudo:session): session closed for user root Feb 12 19:22:52.997000 audit[1350]: USER_END pid=1350 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.002516 kernel: audit: type=1130 audit(1707765772.996:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.003900 kernel: audit: type=1106 audit(1707765772.997:134): pid=1350 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.003928 kernel: audit: type=1104 audit(1707765772.998:135): pid=1350 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:52.998000 audit[1350]: CRED_DISP pid=1350 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.004990 sshd[1344]: pam_unix(sshd:session): session closed for user core Feb 12 19:22:53.005000 audit[1344]: USER_END pid=1344 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.007291 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:53440.service. 
Feb 12 19:22:53.008028 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:53432.service: Deactivated successfully. Feb 12 19:22:53.005000 audit[1344]: CRED_DISP pid=1344 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.008821 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:22:53.010234 systemd-logind[1207]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:22:53.011882 kernel: audit: type=1106 audit(1707765773.005:136): pid=1344 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.011939 kernel: audit: type=1104 audit(1707765773.005:137): pid=1344 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.011957 kernel: audit: type=1130 audit(1707765773.006:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.84:22-10.0.0.1:53440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.84:22-10.0.0.1:53440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.013914 systemd-logind[1207]: Removed session 6. 
Feb 12 19:22:53.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.84:22-10.0.0.1:53432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.044000 audit[1377]: USER_ACCT pid=1377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.045422 sshd[1377]: Accepted publickey for core from 10.0.0.1 port 53440 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:22:53.046000 audit[1377]: CRED_ACQ pid=1377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.046000 audit[1377]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4aa2480 a2=3 a3=1 items=0 ppid=1 pid=1377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:53.046000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:22:53.046964 sshd[1377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:22:53.051194 systemd[1]: Started session-7.scope. Feb 12 19:22:53.051386 systemd-logind[1207]: New session 7 of user core. 
Feb 12 19:22:53.053000 audit[1377]: USER_START pid=1377 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.055000 audit[1382]: CRED_ACQ pid=1382 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:22:53.103000 audit[1383]: USER_ACCT pid=1383 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.103000 audit[1383]: CRED_REFR pid=1383 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.104743 sudo[1383]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:22:53.104943 sudo[1383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:22:53.105000 audit[1383]: USER_START pid=1383 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:22:53.880701 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:22:53.887273 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:22:53.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:22:53.887580 systemd[1]: Reached target network-online.target. Feb 12 19:22:53.889062 systemd[1]: Starting docker.service... Feb 12 19:22:53.973985 env[1402]: time="2024-02-12T19:22:53.973905796Z" level=info msg="Starting up" Feb 12 19:22:53.975618 env[1402]: time="2024-02-12T19:22:53.975586022Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:22:53.975735 env[1402]: time="2024-02-12T19:22:53.975721652Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:22:53.975800 env[1402]: time="2024-02-12T19:22:53.975784104Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:22:53.975849 env[1402]: time="2024-02-12T19:22:53.975838064Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:22:53.980976 env[1402]: time="2024-02-12T19:22:53.980943306Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:22:53.980976 env[1402]: time="2024-02-12T19:22:53.980970854Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:22:53.981120 env[1402]: time="2024-02-12T19:22:53.980991252Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:22:53.981120 env[1402]: time="2024-02-12T19:22:53.981001288Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:22:54.213819 env[1402]: time="2024-02-12T19:22:54.213708140Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 19:22:54.213819 env[1402]: time="2024-02-12T19:22:54.213741597Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 19:22:54.214329 env[1402]: time="2024-02-12T19:22:54.214266842Z" level=info msg="Loading containers: start." 
Feb 12 19:22:54.255000 audit[1436]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.255000 audit[1436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffec8ecce0 a2=0 a3=1 items=0 ppid=1402 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.255000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 12 19:22:54.258000 audit[1438]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.258000 audit[1438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcac98870 a2=0 a3=1 items=0 ppid=1402 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 12 19:22:54.260000 audit[1440]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.260000 audit[1440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcab10b80 a2=0 a3=1 items=0 ppid=1402 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 12 19:22:54.262000 
audit[1442]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.262000 audit[1442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffd6259e0 a2=0 a3=1 items=0 ppid=1402 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 12 19:22:54.264000 audit[1444]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.264000 audit[1444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff155fbf0 a2=0 a3=1 items=0 ppid=1402 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.264000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 12 19:22:54.294000 audit[1449]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.294000 audit[1449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe6fdb9b0 a2=0 a3=1 items=0 ppid=1402 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.294000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 12 19:22:54.301000 audit[1451]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.301000 audit[1451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc1e891f0 a2=0 a3=1 items=0 ppid=1402 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.301000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 12 19:22:54.302000 audit[1453]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.302000 audit[1453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc6a18ac0 a2=0 a3=1 items=0 ppid=1402 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.302000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 12 19:22:54.305000 audit[1455]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.305000 audit[1455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffeee07d90 a2=0 a3=1 items=0 ppid=1402 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.305000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 12 19:22:54.311000 audit[1459]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.311000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffec6e03d0 a2=0 a3=1 items=0 ppid=1402 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.311000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 12 19:22:54.312000 audit[1460]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.312000 audit[1460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd42f0d70 a2=0 a3=1 items=0 ppid=1402 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 12 19:22:54.321129 kernel: Initializing XFRM netlink socket Feb 12 19:22:54.347085 env[1402]: time="2024-02-12T19:22:54.347025860Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 12 19:22:54.365000 audit[1468]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.365000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffff8cb940 a2=0 a3=1 items=0 ppid=1402 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.365000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 12 19:22:54.379000 audit[1471]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.379000 audit[1471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff2d7f800 a2=0 a3=1 items=0 ppid=1402 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.379000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 12 19:22:54.382000 audit[1474]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.382000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff4e90e90 a2=0 a3=1 items=0 ppid=1402 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 
12 19:22:54.382000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 12 19:22:54.384000 audit[1476]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.384000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff6800ea0 a2=0 a3=1 items=0 ppid=1402 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.384000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 12 19:22:54.385000 audit[1478]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.385000 audit[1478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffd0de2d70 a2=0 a3=1 items=0 ppid=1402 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.385000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 12 19:22:54.386000 audit[1480]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.386000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff4e97e90 a2=0 a3=1 items=0 ppid=1402 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.386000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 12 19:22:54.388000 audit[1482]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.388000 audit[1482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffffd6bce40 a2=0 a3=1 items=0 ppid=1402 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.388000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 12 19:22:54.397000 audit[1485]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.397000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffeb111af0 a2=0 a3=1 items=0 ppid=1402 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.397000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 12 19:22:54.398000 audit[1487]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.398000 
audit[1487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe2def7c0 a2=0 a3=1 items=0 ppid=1402 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.398000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 12 19:22:54.400000 audit[1489]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.400000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe1f8ca30 a2=0 a3=1 items=0 ppid=1402 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.400000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 12 19:22:54.402000 audit[1491]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.402000 audit[1491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc048e6d0 a2=0 a3=1 items=0 ppid=1402 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.402000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 12 19:22:54.403655 systemd-networkd[1105]: docker0: Link UP Feb 12 19:22:54.408000 audit[1495]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.408000 audit[1495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffef81e570 a2=0 a3=1 items=0 ppid=1402 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.408000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 12 19:22:54.409000 audit[1496]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:22:54.409000 audit[1496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffe518300 a2=0 a3=1 items=0 ppid=1402 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:22:54.409000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 12 19:22:54.411220 env[1402]: time="2024-02-12T19:22:54.411187614Z" level=info msg="Loading containers: done." Feb 12 19:22:54.432774 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3704925984-merged.mount: Deactivated successfully. 
Feb 12 19:22:54.438695 env[1402]: time="2024-02-12T19:22:54.438638223Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:22:54.438862 env[1402]: time="2024-02-12T19:22:54.438838313Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:22:54.438956 env[1402]: time="2024-02-12T19:22:54.438937020Z" level=info msg="Daemon has completed initialization" Feb 12 19:22:54.454128 systemd[1]: Started docker.service. Feb 12 19:22:54.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:54.458475 env[1402]: time="2024-02-12T19:22:54.458360318Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:22:54.475529 systemd[1]: Reloading. Feb 12 19:22:54.513608 /usr/lib/systemd/system-generators/torcx-generator[1545]: time="2024-02-12T19:22:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:22:54.513997 /usr/lib/systemd/system-generators/torcx-generator[1545]: time="2024-02-12T19:22:54Z" level=info msg="torcx already run" Feb 12 19:22:54.580751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:22:54.580772 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 19:22:54.598271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:22:54.654660 systemd[1]: Started kubelet.service. Feb 12 19:22:54.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:22:54.861920 kubelet[1587]: E0212 19:22:54.861766 1587 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:22:54.864183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:22:54.864376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:22:54.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:22:55.051456 env[1229]: time="2024-02-12T19:22:55.051414135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:22:55.669306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486129723.mount: Deactivated successfully. 
Feb 12 19:22:57.159182 env[1229]: time="2024-02-12T19:22:57.159132052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:57.160822 env[1229]: time="2024-02-12T19:22:57.160775742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:57.164175 env[1229]: time="2024-02-12T19:22:57.164144872Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:57.166183 env[1229]: time="2024-02-12T19:22:57.166146690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:57.167197 env[1229]: time="2024-02-12T19:22:57.167164353Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 12 19:22:57.176497 env[1229]: time="2024-02-12T19:22:57.176460619Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:22:58.986111 env[1229]: time="2024-02-12T19:22:58.986051789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:58.988752 env[1229]: time="2024-02-12T19:22:58.988714685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:22:58.990354 env[1229]: time="2024-02-12T19:22:58.990319309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:58.993052 env[1229]: time="2024-02-12T19:22:58.993017086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:22:58.993501 env[1229]: time="2024-02-12T19:22:58.993469968Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 12 19:22:59.002649 env[1229]: time="2024-02-12T19:22:59.002602904Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:23:00.274414 env[1229]: time="2024-02-12T19:23:00.274365788Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:00.275969 env[1229]: time="2024-02-12T19:23:00.275924725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:00.278312 env[1229]: time="2024-02-12T19:23:00.278286213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:00.280141 env[1229]: time="2024-02-12T19:23:00.280114774Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:00.280797 env[1229]: time="2024-02-12T19:23:00.280768486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 12 19:23:00.291046 env[1229]: time="2024-02-12T19:23:00.291009906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:23:01.305706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953926294.mount: Deactivated successfully. Feb 12 19:23:01.648777 env[1229]: time="2024-02-12T19:23:01.648659377Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:01.650101 env[1229]: time="2024-02-12T19:23:01.650058213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:01.651496 env[1229]: time="2024-02-12T19:23:01.651451017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:01.652761 env[1229]: time="2024-02-12T19:23:01.652729484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:01.653218 env[1229]: time="2024-02-12T19:23:01.653188805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 19:23:01.662880 env[1229]: time="2024-02-12T19:23:01.662831415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:23:02.106318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566200768.mount: Deactivated successfully. Feb 12 19:23:02.111626 env[1229]: time="2024-02-12T19:23:02.111565395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:02.112791 env[1229]: time="2024-02-12T19:23:02.112761814Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:02.114535 env[1229]: time="2024-02-12T19:23:02.114509239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:02.116183 env[1229]: time="2024-02-12T19:23:02.116147830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:02.116720 env[1229]: time="2024-02-12T19:23:02.116693330Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 12 19:23:02.126660 env[1229]: time="2024-02-12T19:23:02.126621088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:23:02.852467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303414297.mount: Deactivated successfully. 
Feb 12 19:23:04.723177 env[1229]: time="2024-02-12T19:23:04.723123054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:04.732114 env[1229]: time="2024-02-12T19:23:04.732048050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:04.736103 env[1229]: time="2024-02-12T19:23:04.736041408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:04.743393 env[1229]: time="2024-02-12T19:23:04.743334408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:04.746204 env[1229]: time="2024-02-12T19:23:04.744349087Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 12 19:23:04.757542 env[1229]: time="2024-02-12T19:23:04.757491493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:23:05.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:05.115291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:23:05.115467 systemd[1]: Stopped kubelet.service. Feb 12 19:23:05.117176 systemd[1]: Started kubelet.service. 
Feb 12 19:23:05.118022 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 12 19:23:05.118102 kernel: audit: type=1130 audit(1707765785.114:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:05.118134 kernel: audit: type=1131 audit(1707765785.114:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:05.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:05.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:05.122475 kernel: audit: type=1130 audit(1707765785.116:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:05.167408 kubelet[1650]: E0212 19:23:05.167349 1650 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:23:05.170856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:23:05.171023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:23:05.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 12 19:23:05.174115 kernel: audit: type=1131 audit(1707765785.170:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:23:05.303810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097252424.mount: Deactivated successfully. Feb 12 19:23:05.774095 env[1229]: time="2024-02-12T19:23:05.774032203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:05.776770 env[1229]: time="2024-02-12T19:23:05.776732814Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:05.778616 env[1229]: time="2024-02-12T19:23:05.778577669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:05.780838 env[1229]: time="2024-02-12T19:23:05.780797033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:05.781511 env[1229]: time="2024-02-12T19:23:05.781480803Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 12 19:23:09.338166 systemd[1]: Stopped kubelet.service. Feb 12 19:23:09.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:23:09.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:09.342589 kernel: audit: type=1130 audit(1707765789.337:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:09.342650 kernel: audit: type=1131 audit(1707765789.337:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:09.355270 systemd[1]: Reloading. Feb 12 19:23:09.402248 /usr/lib/systemd/system-generators/torcx-generator[1748]: time="2024-02-12T19:23:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:23:09.402633 /usr/lib/systemd/system-generators/torcx-generator[1748]: time="2024-02-12T19:23:09Z" level=info msg="torcx already run" Feb 12 19:23:09.532974 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:23:09.532995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:09.550566 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:09.615786 systemd[1]: Started kubelet.service. 
Feb 12 19:23:09.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:09.619113 kernel: audit: type=1130 audit(1707765789.615:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:09.680808 kubelet[1792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:09.680808 kubelet[1792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:23:09.681219 kubelet[1792]: I0212 19:23:09.681167 1792 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:23:09.683437 kubelet[1792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:09.683437 kubelet[1792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:23:10.400180 kubelet[1792]: I0212 19:23:10.400144 1792 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:23:10.400180 kubelet[1792]: I0212 19:23:10.400170 1792 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:23:10.400420 kubelet[1792]: I0212 19:23:10.400406 1792 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:23:10.404711 kubelet[1792]: I0212 19:23:10.404580 1792 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:23:10.405329 kubelet[1792]: E0212 19:23:10.405312 1792 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.407074 kubelet[1792]: W0212 19:23:10.407053 1792 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:23:10.408100 kubelet[1792]: I0212 19:23:10.408073 1792 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:23:10.411053 kubelet[1792]: I0212 19:23:10.411027 1792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:23:10.411256 kubelet[1792]: I0212 19:23:10.411242 1792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:23:10.411487 kubelet[1792]: I0212 19:23:10.411452 1792 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:23:10.411552 kubelet[1792]: I0212 19:23:10.411543 1792 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:23:10.411775 kubelet[1792]: I0212 19:23:10.411761 1792 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 19:23:10.416631 kubelet[1792]: I0212 19:23:10.416599 1792 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:23:10.416631 kubelet[1792]: I0212 19:23:10.416627 1792 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:23:10.416775 kubelet[1792]: W0212 19:23:10.416633 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.416775 kubelet[1792]: E0212 19:23:10.416689 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.416825 kubelet[1792]: I0212 19:23:10.416777 1792 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:23:10.416825 kubelet[1792]: I0212 19:23:10.416792 1792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:23:10.417307 kubelet[1792]: W0212 19:23:10.417264 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.417349 kubelet[1792]: E0212 19:23:10.417319 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.418093 kubelet[1792]: I0212 19:23:10.418071 1792 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 
12 19:23:10.419012 kubelet[1792]: W0212 19:23:10.418980 1792 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:23:10.419530 kubelet[1792]: I0212 19:23:10.419506 1792 server.go:1186] "Started kubelet" Feb 12 19:23:10.419885 kubelet[1792]: I0212 19:23:10.419860 1792 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:23:10.420356 kubelet[1792]: E0212 19:23:10.420261 1792 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f0318473ea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 419481578, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 419481578, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.84:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.84:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:23:10.420807 kubelet[1792]: I0212 19:23:10.420636 1792 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:23:10.420896 kubelet[1792]: E0212 
19:23:10.420872 1792 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:23:10.420942 kubelet[1792]: E0212 19:23:10.420898 1792 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:23:10.420000 audit[1792]: AVC avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:10.420000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:10.424278 kubelet[1792]: I0212 19:23:10.421731 1792 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 19:23:10.424278 kubelet[1792]: I0212 19:23:10.421761 1792 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 19:23:10.424278 kubelet[1792]: I0212 19:23:10.421827 1792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:23:10.424278 kubelet[1792]: I0212 19:23:10.421966 1792 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:23:10.424278 kubelet[1792]: E0212 19:23:10.422714 1792 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:23:10.424278 kubelet[1792]: W0212 19:23:10.423147 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.424278 kubelet[1792]: E0212 19:23:10.423187 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.424278 kubelet[1792]: I0212 19:23:10.423322 1792 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:23:10.424278 kubelet[1792]: E0212 19:23:10.423869 1792 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.424596 kernel: audit: type=1400 audit(1707765790.420:183): avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:10.424646 kernel: audit: type=1401 audit(1707765790.420:183): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:10.424666 kernel: audit: type=1300 audit(1707765790.420:183): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a81500 a1=4000cc0330 a2=4000a814d0 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.420000 audit[1792]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a81500 a1=4000cc0330 a2=4000a814d0 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.427235 kernel: audit: type=1327 audit(1707765790.420:183): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:10.420000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:10.420000 audit[1792]: AVC avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:10.431539 kernel: audit: type=1400 audit(1707765790.420:184): avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:10.431629 kernel: audit: type=1401 audit(1707765790.420:184): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:10.420000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:10.420000 audit[1792]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40005034e0 a1=4000cc0348 a2=4000a81590 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.435383 kernel: audit: type=1300 audit(1707765790.420:184): arch=c00000b7 syscall=5 success=no exit=-22 a0=40005034e0 a1=4000cc0348 a2=4000a81590 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.435467 kernel: audit: type=1327 audit(1707765790.420:184): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:10.420000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:10.424000 audit[1804]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.439617 kernel: audit: type=1325 audit(1707765790.424:185): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.439709 kernel: audit: type=1300 audit(1707765790.424:185): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe1ee0ec0 a2=0 a3=1 items=0 ppid=1792 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.424000 audit[1804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe1ee0ec0 a2=0 a3=1 items=0 ppid=1792 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.424000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:23:10.424000 audit[1805]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.424000 audit[1805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffccebd190 a2=0 a3=1 items=0 ppid=1792 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:23:10.426000 audit[1807]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.426000 audit[1807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd9ccac90 a2=0 a3=1 items=0 ppid=1792 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:23:10.429000 audit[1809]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.429000 audit[1809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd0a6fdb0 a2=0 a3=1 items=0 ppid=1792 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.429000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:23:10.450000 audit[1817]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.450000 audit[1817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd4373ae0 a2=0 a3=1 items=0 ppid=1792 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 19:23:10.451000 audit[1818]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.451000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe514f2b0 a2=0 a3=1 items=0 ppid=1792 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:23:10.461014 kubelet[1792]: I0212 19:23:10.460981 1792 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:23:10.459000 audit[1823]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.459000 audit[1823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 
a1=ffffdba414e0 a2=0 a3=1 items=0 ppid=1792 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:23:10.461348 kubelet[1792]: I0212 19:23:10.461334 1792 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:23:10.461410 kubelet[1792]: I0212 19:23:10.461402 1792 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:23:10.464291 kubelet[1792]: I0212 19:23:10.464261 1792 policy_none.go:49] "None policy: Start" Feb 12 19:23:10.467029 kubelet[1792]: I0212 19:23:10.466998 1792 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:23:10.467124 kubelet[1792]: I0212 19:23:10.467040 1792 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:23:10.467000 audit[1826]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.467000 audit[1826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff7b35630 a2=0 a3=1 items=0 ppid=1792 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:23:10.472037 kubelet[1792]: I0212 19:23:10.471994 1792 manager.go:455] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:23:10.470000 audit[1792]: AVC avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:10.470000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:10.470000 audit[1792]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000adea80 a1=400139d098 a2=4000adea50 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.470000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:10.472282 kubelet[1792]: I0212 19:23:10.472140 1792 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:23:10.472369 kubelet[1792]: I0212 19:23:10.472339 1792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:23:10.472000 audit[1827]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.472000 audit[1827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd775c810 a2=0 a3=1 items=0 ppid=1792 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:23:10.473956 kubelet[1792]: E0212 19:23:10.473883 1792 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 19:23:10.473000 audit[1828]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.473000 audit[1828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0d03ac0 a2=0 a3=1 items=0 ppid=1792 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:23:10.475000 audit[1830]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.475000 audit[1830]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffffeb3cc40 a2=0 a3=1 items=0 ppid=1792 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:23:10.478000 audit[1832]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.478000 audit[1832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffff0f9270 a2=0 a3=1 items=0 ppid=1792 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:23:10.481000 audit[1834]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.481000 audit[1834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe74a62c0 a2=0 a3=1 items=0 ppid=1792 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.481000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:23:10.484000 audit[1836]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.484000 audit[1836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff3771ba0 a2=0 a3=1 items=0 ppid=1792 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:23:10.486000 audit[1838]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.486000 audit[1838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffcdc8e290 a2=0 a3=1 items=0 ppid=1792 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:23:10.488670 kubelet[1792]: I0212 19:23:10.488636 1792 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:23:10.488000 audit[1839]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.488000 audit[1839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc0d6bc90 a2=0 a3=1 items=0 ppid=1792 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:23:10.488000 audit[1840]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.488000 audit[1840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdaea88f0 a2=0 a3=1 items=0 ppid=1792 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:23:10.489000 audit[1841]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.489000 audit[1841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffee485ba0 a2=0 a3=1 items=0 ppid=1792 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.489000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:23:10.490000 audit[1842]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.490000 audit[1842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0db6eb0 a2=0 a3=1 items=0 ppid=1792 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:23:10.491000 audit[1844]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:10.491000 audit[1844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd8d8f60 a2=0 a3=1 items=0 ppid=1792 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:23:10.491000 audit[1845]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.491000 audit[1845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffda2a1750 a2=0 a3=1 items=0 ppid=1792 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.491000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:23:10.492000 audit[1846]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.492000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff29de270 a2=0 a3=1 items=0 ppid=1792 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:23:10.494000 audit[1848]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1848 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.494000 audit[1848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff0bb5b40 a2=0 a3=1 items=0 ppid=1792 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:23:10.495000 audit[1849]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.495000 audit[1849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4b196c0 a2=0 a3=1 items=0 ppid=1792 pid=1849 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:23:10.496000 audit[1850]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.496000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd43c70a0 a2=0 a3=1 items=0 ppid=1792 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:23:10.498000 audit[1852]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.498000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff0e6b370 a2=0 a3=1 items=0 ppid=1792 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:23:10.500000 audit[1854]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.500000 audit[1854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffeda328b0 a2=0 a3=1 items=0 
ppid=1792 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:23:10.502000 audit[1856]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.502000 audit[1856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe8beace0 a2=0 a3=1 items=0 ppid=1792 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:23:10.505000 audit[1858]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.505000 audit[1858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc34f88a0 a2=0 a3=1 items=0 ppid=1792 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 
19:23:10.509000 audit[1860]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.509000 audit[1860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffc9e30da0 a2=0 a3=1 items=0 ppid=1792 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:23:10.510978 kubelet[1792]: I0212 19:23:10.510717 1792 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:23:10.510978 kubelet[1792]: I0212 19:23:10.510740 1792 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:23:10.510978 kubelet[1792]: I0212 19:23:10.510758 1792 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:23:10.510978 kubelet[1792]: E0212 19:23:10.510803 1792 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:23:10.512052 kubelet[1792]: W0212 19:23:10.511981 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.512052 kubelet[1792]: E0212 19:23:10.512051 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: 
connect: connection refused Feb 12 19:23:10.511000 audit[1861]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.511000 audit[1861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecf2a920 a2=0 a3=1 items=0 ppid=1792 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:23:10.512000 audit[1862]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.512000 audit[1862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde36ee00 a2=0 a3=1 items=0 ppid=1792 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:23:10.513000 audit[1863]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:10.513000 audit[1863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9d0cdd0 a2=0 a3=1 items=0 ppid=1792 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:10.513000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:23:10.523809 kubelet[1792]: I0212 19:23:10.523774 1792 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:23:10.524296 kubelet[1792]: E0212 19:23:10.524277 1792 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Feb 12 19:23:10.611656 kubelet[1792]: I0212 19:23:10.611604 1792 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:10.612764 kubelet[1792]: I0212 19:23:10.612725 1792 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:10.613512 kubelet[1792]: I0212 19:23:10.613466 1792 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:10.614207 kubelet[1792]: I0212 19:23:10.614179 1792 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.84:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.84:6443: connect: connection refused" Feb 12 19:23:10.615810 kubelet[1792]: I0212 19:23:10.615786 1792 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.84:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.84:6443: connect: connection refused" Feb 12 19:23:10.617680 kubelet[1792]: I0212 19:23:10.617659 1792 status_manager.go:698] "Failed to get status for pod" podUID=10cc8b41b4dd01edfd5c730b4388ba1b pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.84:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.84:6443: connect: connection refused" Feb 12 19:23:10.624868 kubelet[1792]: E0212 
19:23:10.624826 1792 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:10.724582 kubelet[1792]: I0212 19:23:10.724534 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:23:10.725017 kubelet[1792]: I0212 19:23:10.724627 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10cc8b41b4dd01edfd5c730b4388ba1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10cc8b41b4dd01edfd5c730b4388ba1b\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:10.725017 kubelet[1792]: I0212 19:23:10.724676 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10cc8b41b4dd01edfd5c730b4388ba1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10cc8b41b4dd01edfd5c730b4388ba1b\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:10.725017 kubelet[1792]: I0212 19:23:10.724745 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:10.725017 kubelet[1792]: I0212 19:23:10.724768 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10cc8b41b4dd01edfd5c730b4388ba1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10cc8b41b4dd01edfd5c730b4388ba1b\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:10.725017 kubelet[1792]: I0212 19:23:10.724790 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:10.725181 kubelet[1792]: I0212 19:23:10.724827 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:10.725181 kubelet[1792]: I0212 19:23:10.724902 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:10.725181 kubelet[1792]: I0212 19:23:10.724925 1792 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:10.725392 kubelet[1792]: I0212 19:23:10.725357 1792 kubelet_node_status.go:70] "Attempting to register node" node="localhost" 
Feb 12 19:23:10.725780 kubelet[1792]: E0212 19:23:10.725750 1792 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Feb 12 19:23:10.916848 kubelet[1792]: E0212 19:23:10.916800 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:10.917456 env[1229]: time="2024-02-12T19:23:10.917409480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:10.917949 kubelet[1792]: E0212 19:23:10.917918 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:10.918362 env[1229]: time="2024-02-12T19:23:10.918332184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:10.921184 kubelet[1792]: E0212 19:23:10.921159 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:10.921780 env[1229]: time="2024-02-12T19:23:10.921719066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10cc8b41b4dd01edfd5c730b4388ba1b,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:11.027854 kubelet[1792]: E0212 19:23:11.025929 1792 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.127011 
kubelet[1792]: I0212 19:23:11.126980 1792 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:23:11.127326 kubelet[1792]: E0212 19:23:11.127312 1792 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Feb 12 19:23:11.306965 kubelet[1792]: W0212 19:23:11.306797 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.306965 kubelet[1792]: E0212 19:23:11.306862 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.406137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334930391.mount: Deactivated successfully. 
Feb 12 19:23:11.410340 env[1229]: time="2024-02-12T19:23:11.410297201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.412949 env[1229]: time="2024-02-12T19:23:11.412909809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.414945 env[1229]: time="2024-02-12T19:23:11.414913188Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.416511 env[1229]: time="2024-02-12T19:23:11.416467165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.417306 env[1229]: time="2024-02-12T19:23:11.417275839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.422506 env[1229]: time="2024-02-12T19:23:11.422463440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.425809 env[1229]: time="2024-02-12T19:23:11.425764030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.426604 env[1229]: time="2024-02-12T19:23:11.426520189Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.427992 env[1229]: time="2024-02-12T19:23:11.427934767Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.428875 env[1229]: time="2024-02-12T19:23:11.428830886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.434708 env[1229]: time="2024-02-12T19:23:11.434669136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.461312 env[1229]: time="2024-02-12T19:23:11.461268048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:11.554379 kubelet[1792]: W0212 19:23:11.554287 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.554379 kubelet[1792]: E0212 19:23:11.554361 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.574321 env[1229]: 
time="2024-02-12T19:23:11.574189006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:11.574321 env[1229]: time="2024-02-12T19:23:11.574232828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:11.574473 env[1229]: time="2024-02-12T19:23:11.574243684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:11.574685 env[1229]: time="2024-02-12T19:23:11.574575958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac0ead30a27aee8de34060d6d6721375592f93577d3b7052e411498153bd77ac pid=1875 runtime=io.containerd.runc.v2 Feb 12 19:23:11.585805 env[1229]: time="2024-02-12T19:23:11.585692659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:11.585805 env[1229]: time="2024-02-12T19:23:11.585772814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:11.585805 env[1229]: time="2024-02-12T19:23:11.585784711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:11.586167 env[1229]: time="2024-02-12T19:23:11.586053775Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64d24035081c4f16b3073c60cece39385b3c464517ef7117c25bac3c1efdf506 pid=1884 runtime=io.containerd.runc.v2 Feb 12 19:23:11.588653 env[1229]: time="2024-02-12T19:23:11.588585547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:11.588864 env[1229]: time="2024-02-12T19:23:11.588788477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:11.588864 env[1229]: time="2024-02-12T19:23:11.588803138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:11.589131 env[1229]: time="2024-02-12T19:23:11.589079452Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdff4df766e504f73a574b444c567816a01323735e73351a2c0f5fc336426bb2 pid=1911 runtime=io.containerd.runc.v2 Feb 12 19:23:11.608918 kubelet[1792]: W0212 19:23:11.606877 1792 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.608918 kubelet[1792]: E0212 19:23:11.606938 1792 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.653080 env[1229]: time="2024-02-12T19:23:11.652349968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0ead30a27aee8de34060d6d6721375592f93577d3b7052e411498153bd77ac\"" Feb 12 19:23:11.654532 env[1229]: time="2024-02-12T19:23:11.654478164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10cc8b41b4dd01edfd5c730b4388ba1b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"bdff4df766e504f73a574b444c567816a01323735e73351a2c0f5fc336426bb2\"" Feb 12 19:23:11.654843 env[1229]: time="2024-02-12T19:23:11.654740498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"64d24035081c4f16b3073c60cece39385b3c464517ef7117c25bac3c1efdf506\"" Feb 12 19:23:11.654922 kubelet[1792]: E0212 19:23:11.654898 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:11.657222 env[1229]: time="2024-02-12T19:23:11.657169484Z" level=info msg="CreateContainer within sandbox \"ac0ead30a27aee8de34060d6d6721375592f93577d3b7052e411498153bd77ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:23:11.658531 kubelet[1792]: E0212 19:23:11.658508 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:11.659467 kubelet[1792]: E0212 19:23:11.659439 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:11.660717 env[1229]: time="2024-02-12T19:23:11.660684099Z" level=info msg="CreateContainer within sandbox \"bdff4df766e504f73a574b444c567816a01323735e73351a2c0f5fc336426bb2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:23:11.662412 env[1229]: time="2024-02-12T19:23:11.662368863Z" level=info msg="CreateContainer within sandbox \"64d24035081c4f16b3073c60cece39385b3c464517ef7117c25bac3c1efdf506\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:23:11.675832 env[1229]: time="2024-02-12T19:23:11.675781921Z" level=info msg="CreateContainer within sandbox 
\"ac0ead30a27aee8de34060d6d6721375592f93577d3b7052e411498153bd77ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4593603159c6f39d9ec3b33aaf9b009ea5d37331267c636599bf53c7a61233f6\"" Feb 12 19:23:11.676598 env[1229]: time="2024-02-12T19:23:11.676561032Z" level=info msg="StartContainer for \"4593603159c6f39d9ec3b33aaf9b009ea5d37331267c636599bf53c7a61233f6\"" Feb 12 19:23:11.677983 env[1229]: time="2024-02-12T19:23:11.677938918Z" level=info msg="CreateContainer within sandbox \"bdff4df766e504f73a574b444c567816a01323735e73351a2c0f5fc336426bb2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ba58e43645c37943052c7276fb266b0b62af93d5ef8f06ff173b235ba7469813\"" Feb 12 19:23:11.678393 env[1229]: time="2024-02-12T19:23:11.678349745Z" level=info msg="StartContainer for \"ba58e43645c37943052c7276fb266b0b62af93d5ef8f06ff173b235ba7469813\"" Feb 12 19:23:11.682653 env[1229]: time="2024-02-12T19:23:11.682609382Z" level=info msg="CreateContainer within sandbox \"64d24035081c4f16b3073c60cece39385b3c464517ef7117c25bac3c1efdf506\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cbc88fc1c41a3d4058df1b37c5fe3cbf3fe1ae353dac396ef5800e6936e8a00c\"" Feb 12 19:23:11.683283 env[1229]: time="2024-02-12T19:23:11.683253942Z" level=info msg="StartContainer for \"cbc88fc1c41a3d4058df1b37c5fe3cbf3fe1ae353dac396ef5800e6936e8a00c\"" Feb 12 19:23:11.770402 env[1229]: time="2024-02-12T19:23:11.770353938Z" level=info msg="StartContainer for \"cbc88fc1c41a3d4058df1b37c5fe3cbf3fe1ae353dac396ef5800e6936e8a00c\" returns successfully" Feb 12 19:23:11.780055 env[1229]: time="2024-02-12T19:23:11.780010757Z" level=info msg="StartContainer for \"ba58e43645c37943052c7276fb266b0b62af93d5ef8f06ff173b235ba7469813\" returns successfully" Feb 12 19:23:11.802215 env[1229]: time="2024-02-12T19:23:11.802104961Z" level=info msg="StartContainer for \"4593603159c6f39d9ec3b33aaf9b009ea5d37331267c636599bf53c7a61233f6\" 
returns successfully" Feb 12 19:23:11.826620 kubelet[1792]: E0212 19:23:11.826392 1792 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.84:6443: connect: connection refused Feb 12 19:23:11.928574 kubelet[1792]: I0212 19:23:11.928533 1792 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:23:11.928875 kubelet[1792]: E0212 19:23:11.928849 1792 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Feb 12 19:23:12.517578 kubelet[1792]: E0212 19:23:12.517555 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:12.518631 kubelet[1792]: E0212 19:23:12.518613 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:12.520508 kubelet[1792]: E0212 19:23:12.520485 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:13.522882 kubelet[1792]: E0212 19:23:13.522851 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:13.523352 kubelet[1792]: E0212 19:23:13.522920 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:13.523451 kubelet[1792]: E0212 19:23:13.523256 1792 dns.go:156] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:13.530512 kubelet[1792]: I0212 19:23:13.530484 1792 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:23:14.159526 kubelet[1792]: E0212 19:23:14.159495 1792 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 12 19:23:14.208633 kubelet[1792]: I0212 19:23:14.208594 1792 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:23:14.419825 kubelet[1792]: I0212 19:23:14.419720 1792 apiserver.go:52] "Watching apiserver" Feb 12 19:23:14.623785 kubelet[1792]: I0212 19:23:14.623729 1792 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:23:14.647325 kubelet[1792]: I0212 19:23:14.647271 1792 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:23:14.821380 kubelet[1792]: E0212 19:23:14.821334 1792 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 12 19:23:14.821672 kubelet[1792]: E0212 19:23:14.821649 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:15.018701 kubelet[1792]: E0212 19:23:15.018657 1792 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:15.019170 kubelet[1792]: E0212 19:23:15.019144 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:15.019673 kubelet[1792]: E0212 19:23:15.019585 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f0318473ea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 419481578, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 419481578, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.074046 kubelet[1792]: E0212 19:23:15.073631 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f03199f342", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 420890434, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 420890434, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.127912 kubelet[1792]: E0212 19:23:15.127431 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f033e7b2ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459540206, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459540206, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.180834 kubelet[1792]: E0212 19:23:15.180720 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f033e7c6f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459545334, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459545334, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.222066 kubelet[1792]: E0212 19:23:15.222033 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:15.234100 kubelet[1792]: E0212 19:23:15.233993 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f033e7d82e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459549742, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459549742, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.306750 kubelet[1792]: E0212 19:23:15.306640 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f034b84051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 473207889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 473207889, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.398962 kubelet[1792]: E0212 19:23:15.398762 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f033e7b2ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459540206, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 523725729, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.456146 kubelet[1792]: E0212 19:23:15.456022 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f033e7c6f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459545334, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 523739552, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:23:15.511025 kubelet[1792]: E0212 19:23:15.510911 1792 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b333f033e7d82e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 459549742, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 10, 523743678, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:23:15.524806 kubelet[1792]: E0212 19:23:15.524760 1792 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:16.722438 systemd[1]: Reloading. 
Feb 12 19:23:16.763045 /usr/lib/systemd/system-generators/torcx-generator[2122]: time="2024-02-12T19:23:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:23:16.763076 /usr/lib/systemd/system-generators/torcx-generator[2122]: time="2024-02-12T19:23:16Z" level=info msg="torcx already run" Feb 12 19:23:16.835903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:23:16.835922 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:16.853264 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:16.922568 systemd[1]: Stopping kubelet.service... Feb 12 19:23:16.941575 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:23:16.941912 systemd[1]: Stopped kubelet.service. Feb 12 19:23:16.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:16.942506 kernel: kauditd_printk_skb: 101 callbacks suppressed Feb 12 19:23:16.942546 kernel: audit: type=1131 audit(1707765796.940:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:16.943897 systemd[1]: Started kubelet.service. 
Feb 12 19:23:16.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:16.946256 kernel: audit: type=1130 audit(1707765796.942:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:17.018978 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:17.018978 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:23:17.018978 kubelet[2167]: I0212 19:23:17.018945 2167 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:23:17.020362 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:23:17.020362 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:23:17.023228 kubelet[2167]: I0212 19:23:17.023199 2167 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:23:17.023228 kubelet[2167]: I0212 19:23:17.023225 2167 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:23:17.023418 kubelet[2167]: I0212 19:23:17.023403 2167 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:23:17.024631 kubelet[2167]: I0212 19:23:17.024604 2167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:23:17.025287 kubelet[2167]: I0212 19:23:17.025258 2167 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:23:17.026815 kubelet[2167]: W0212 19:23:17.026802 2167 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:23:17.027555 kubelet[2167]: I0212 19:23:17.027528 2167 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:23:17.027876 kubelet[2167]: I0212 19:23:17.027866 2167 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:23:17.027940 kubelet[2167]: I0212 19:23:17.027930 2167 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} 
{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:23:17.028005 kubelet[2167]: I0212 19:23:17.027950 2167 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:23:17.028005 kubelet[2167]: I0212 19:23:17.027962 2167 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:23:17.028005 kubelet[2167]: I0212 19:23:17.027990 2167 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:23:17.030786 kubelet[2167]: I0212 19:23:17.030737 2167 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:23:17.030786 kubelet[2167]: I0212 19:23:17.030762 2167 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:23:17.030786 kubelet[2167]: I0212 19:23:17.030786 2167 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:23:17.030903 kubelet[2167]: I0212 19:23:17.030797 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:23:17.034323 kubelet[2167]: I0212 19:23:17.034300 2167 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:23:17.035264 kubelet[2167]: I0212 19:23:17.035237 2167 server.go:1186] "Started kubelet" Feb 12 19:23:17.035713 kubelet[2167]: I0212 19:23:17.035691 2167 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:23:17.036760 
kubelet[2167]: I0212 19:23:17.036737 2167 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:23:17.037572 kubelet[2167]: E0212 19:23:17.037551 2167 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:23:17.037683 kubelet[2167]: E0212 19:23:17.037671 2167 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:23:17.037000 audit[2167]: AVC avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:17.038549 kubelet[2167]: I0212 19:23:17.038534 2167 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 19:23:17.038654 kubelet[2167]: I0212 19:23:17.038642 2167 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 19:23:17.038724 kubelet[2167]: I0212 19:23:17.038713 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:23:17.037000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:17.041207 kernel: audit: type=1400 audit(1707765797.037:221): avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:17.041242 kernel: audit: type=1401 audit(1707765797.037:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" 
Feb 12 19:23:17.041258 kernel: audit: type=1300 audit(1707765797.037:221): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c6e420 a1=4000546480 a2=4000c6e3f0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:17.037000 audit[2167]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c6e420 a1=4000546480 a2=4000c6e3f0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:17.037000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:17.046558 kernel: audit: type=1327 audit(1707765797.037:221): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:17.046591 kernel: audit: type=1400 audit(1707765797.037:222): avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:17.037000 audit[2167]: AVC avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:17.054144 kernel: audit: type=1401 audit(1707765797.037:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 
19:23:17.054255 kernel: audit: type=1300 audit(1707765797.037:222): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000207500 a1=4000546498 a2=4000c6e4b0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:17.037000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:17.037000 audit[2167]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000207500 a1=4000546498 a2=4000c6e4b0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:17.054379 kubelet[2167]: I0212 19:23:17.050047 2167 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:23:17.054379 kubelet[2167]: I0212 19:23:17.050200 2167 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:23:17.037000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:17.056926 kernel: audit: type=1327 audit(1707765797.037:222): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:17.094126 kubelet[2167]: I0212 19:23:17.091198 2167 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:23:17.123871 kubelet[2167]: I0212 19:23:17.123840 2167 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:23:17.123871 kubelet[2167]: I0212 19:23:17.123864 2167 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:23:17.124016 kubelet[2167]: I0212 19:23:17.123999 2167 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:23:17.126592 kubelet[2167]: E0212 19:23:17.125888 2167 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:23:17.138103 kubelet[2167]: I0212 19:23:17.138065 2167 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:23:17.138217 kubelet[2167]: I0212 19:23:17.138197 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:23:17.138273 kubelet[2167]: I0212 19:23:17.138221 2167 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:23:17.138398 kubelet[2167]: I0212 19:23:17.138373 2167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:23:17.138398 kubelet[2167]: I0212 19:23:17.138398 2167 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:23:17.138446 kubelet[2167]: I0212 19:23:17.138405 2167 policy_none.go:49] "None policy: Start" Feb 12 19:23:17.138945 kubelet[2167]: I0212 19:23:17.138920 2167 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:23:17.138978 kubelet[2167]: I0212 19:23:17.138949 2167 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:23:17.139100 kubelet[2167]: I0212 19:23:17.139073 2167 state_mem.go:75] "Updated machine memory state" Feb 12 19:23:17.141055 kubelet[2167]: I0212 19:23:17.141030 2167 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:23:17.139000 audit[2167]: AVC avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 
19:23:17.139000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:17.139000 audit[2167]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d91f50 a1=4000b25920 a2=4000d91f20 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:17.139000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:17.141270 kubelet[2167]: I0212 19:23:17.141143 2167 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:23:17.141399 kubelet[2167]: I0212 19:23:17.141369 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:23:17.156474 kubelet[2167]: I0212 19:23:17.156441 2167 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:23:17.165148 kubelet[2167]: I0212 19:23:17.165117 2167 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 19:23:17.165305 kubelet[2167]: I0212 19:23:17.165208 2167 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:23:17.227643 kubelet[2167]: I0212 19:23:17.227604 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:17.227900 kubelet[2167]: I0212 19:23:17.227882 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:17.228136 kubelet[2167]: I0212 19:23:17.228106 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:17.238608 kubelet[2167]: E0212 19:23:17.238574 2167 
kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:17.351464 kubelet[2167]: I0212 19:23:17.351348 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10cc8b41b4dd01edfd5c730b4388ba1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10cc8b41b4dd01edfd5c730b4388ba1b\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:17.351464 kubelet[2167]: I0212 19:23:17.351403 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:17.351464 kubelet[2167]: I0212 19:23:17.351429 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:17.351464 kubelet[2167]: I0212 19:23:17.351450 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:17.351657 kubelet[2167]: I0212 19:23:17.351486 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:17.351657 kubelet[2167]: I0212 19:23:17.351541 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:23:17.351657 kubelet[2167]: I0212 19:23:17.351597 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10cc8b41b4dd01edfd5c730b4388ba1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10cc8b41b4dd01edfd5c730b4388ba1b\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:17.351657 kubelet[2167]: I0212 19:23:17.351626 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10cc8b41b4dd01edfd5c730b4388ba1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10cc8b41b4dd01edfd5c730b4388ba1b\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:17.351743 kubelet[2167]: I0212 19:23:17.351660 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:17.534627 kubelet[2167]: E0212 19:23:17.534575 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:17.539507 kubelet[2167]: E0212 19:23:17.539464 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:17.738556 kubelet[2167]: E0212 19:23:17.738515 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:18.033330 kubelet[2167]: I0212 19:23:18.033203 2167 apiserver.go:52] "Watching apiserver" Feb 12 19:23:18.051173 kubelet[2167]: I0212 19:23:18.051110 2167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:23:18.055226 kubelet[2167]: I0212 19:23:18.055179 2167 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:23:18.439362 kubelet[2167]: E0212 19:23:18.439324 2167 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 19:23:18.440673 kubelet[2167]: E0212 19:23:18.439642 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:18.645114 kubelet[2167]: E0212 19:23:18.645056 2167 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 12 19:23:18.645525 kubelet[2167]: E0212 19:23:18.645499 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:18.847700 kubelet[2167]: E0212 19:23:18.847602 2167 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Feb 12 19:23:18.848082 kubelet[2167]: E0212 19:23:18.848057 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:19.133906 kubelet[2167]: E0212 19:23:19.133787 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:19.133906 kubelet[2167]: E0212 19:23:19.133812 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:19.133906 kubelet[2167]: E0212 19:23:19.133837 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:19.442403 kubelet[2167]: I0212 19:23:19.442349 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.4422976800000002 pod.CreationTimestamp="2024-02-12 19:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:19.046963679 +0000 UTC m=+2.088014191" watchObservedRunningTime="2024-02-12 19:23:19.44229768 +0000 UTC m=+2.483348192" Feb 12 19:23:19.442646 kubelet[2167]: I0212 19:23:19.442477 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.442457359 pod.CreationTimestamp="2024-02-12 19:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:19.442269507 +0000 UTC m=+2.483320019" 
watchObservedRunningTime="2024-02-12 19:23:19.442457359 +0000 UTC m=+2.483507871" Feb 12 19:23:19.841380 kubelet[2167]: I0212 19:23:19.841264 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.84121328 pod.CreationTimestamp="2024-02-12 19:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:19.840998334 +0000 UTC m=+2.882048886" watchObservedRunningTime="2024-02-12 19:23:19.84121328 +0000 UTC m=+2.882263792" Feb 12 19:23:20.404973 sudo[1383]: pam_unix(sudo:session): session closed for user root Feb 12 19:23:20.403000 audit[1383]: USER_END pid=1383 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:23:20.404000 audit[1383]: CRED_DISP pid=1383 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:23:20.408343 sshd[1377]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:20.408000 audit[1377]: USER_END pid=1377 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:20.408000 audit[1377]: CRED_DISP pid=1377 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:20.413251 systemd-logind[1207]: Session 7 logged out. Waiting for processes to exit. 
Feb 12 19:23:20.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.84:22-10.0.0.1:53440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:20.413433 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:53440.service: Deactivated successfully. Feb 12 19:23:20.414684 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:23:20.415287 systemd-logind[1207]: Removed session 7. Feb 12 19:23:23.564647 kubelet[2167]: E0212 19:23:23.564566 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:24.141872 kubelet[2167]: E0212 19:23:24.141836 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:25.871344 kubelet[2167]: E0212 19:23:25.871318 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:26.146691 kubelet[2167]: E0212 19:23:26.146592 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:26.519503 kubelet[2167]: E0212 19:23:26.519474 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:27.147716 kubelet[2167]: E0212 19:23:27.147666 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:27.148073 kubelet[2167]: E0212 19:23:27.147851 2167 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:31.480189 kubelet[2167]: I0212 19:23:31.480157 2167 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:23:31.480574 env[1229]: time="2024-02-12T19:23:31.480476743Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:23:31.481400 kubelet[2167]: I0212 19:23:31.480917 2167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:23:32.133079 kubelet[2167]: I0212 19:23:32.133037 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:32.159042 kubelet[2167]: I0212 19:23:32.158998 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53db7e49-4315-4cf8-a60e-365a74afffb0-kube-proxy\") pod \"kube-proxy-dm6r8\" (UID: \"53db7e49-4315-4cf8-a60e-365a74afffb0\") " pod="kube-system/kube-proxy-dm6r8" Feb 12 19:23:32.159451 kubelet[2167]: I0212 19:23:32.159269 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53db7e49-4315-4cf8-a60e-365a74afffb0-xtables-lock\") pod \"kube-proxy-dm6r8\" (UID: \"53db7e49-4315-4cf8-a60e-365a74afffb0\") " pod="kube-system/kube-proxy-dm6r8" Feb 12 19:23:32.159684 kubelet[2167]: I0212 19:23:32.159640 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53db7e49-4315-4cf8-a60e-365a74afffb0-lib-modules\") pod \"kube-proxy-dm6r8\" (UID: \"53db7e49-4315-4cf8-a60e-365a74afffb0\") " pod="kube-system/kube-proxy-dm6r8" Feb 12 19:23:32.159798 kubelet[2167]: I0212 19:23:32.159785 2167 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65scc\" (UniqueName: \"kubernetes.io/projected/53db7e49-4315-4cf8-a60e-365a74afffb0-kube-api-access-65scc\") pod \"kube-proxy-dm6r8\" (UID: \"53db7e49-4315-4cf8-a60e-365a74afffb0\") " pod="kube-system/kube-proxy-dm6r8" Feb 12 19:23:32.395853 kubelet[2167]: I0212 19:23:32.395752 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:32.436513 kubelet[2167]: E0212 19:23:32.436474 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:32.437301 env[1229]: time="2024-02-12T19:23:32.437259911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm6r8,Uid:53db7e49-4315-4cf8-a60e-365a74afffb0,Namespace:kube-system,Attempt:0,}" Feb 12 19:23:32.455225 env[1229]: time="2024-02-12T19:23:32.455150903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:32.455366 env[1229]: time="2024-02-12T19:23:32.455229161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:32.455366 env[1229]: time="2024-02-12T19:23:32.455257528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:32.455471 env[1229]: time="2024-02-12T19:23:32.455435649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb9ce754604bfc55f06b85a5da5c19969161e4d03a37a898773c4b1d49f987d5 pid=2286 runtime=io.containerd.runc.v2 Feb 12 19:23:32.462028 kubelet[2167]: I0212 19:23:32.461987 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gr5p\" (UniqueName: \"kubernetes.io/projected/4b1fc96d-6ac9-47ec-bc58-8d8f2fc52304-kube-api-access-5gr5p\") pod \"tigera-operator-cfc98749c-p998x\" (UID: \"4b1fc96d-6ac9-47ec-bc58-8d8f2fc52304\") " pod="tigera-operator/tigera-operator-cfc98749c-p998x" Feb 12 19:23:32.462329 kubelet[2167]: I0212 19:23:32.462312 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b1fc96d-6ac9-47ec-bc58-8d8f2fc52304-var-lib-calico\") pod \"tigera-operator-cfc98749c-p998x\" (UID: \"4b1fc96d-6ac9-47ec-bc58-8d8f2fc52304\") " pod="tigera-operator/tigera-operator-cfc98749c-p998x" Feb 12 19:23:32.467267 systemd[1]: run-containerd-runc-k8s.io-bb9ce754604bfc55f06b85a5da5c19969161e4d03a37a898773c4b1d49f987d5-runc.3o8i4f.mount: Deactivated successfully. 
Feb 12 19:23:32.577962 env[1229]: time="2024-02-12T19:23:32.577909476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm6r8,Uid:53db7e49-4315-4cf8-a60e-365a74afffb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb9ce754604bfc55f06b85a5da5c19969161e4d03a37a898773c4b1d49f987d5\"" Feb 12 19:23:32.578705 kubelet[2167]: E0212 19:23:32.578670 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:32.580554 env[1229]: time="2024-02-12T19:23:32.580516476Z" level=info msg="CreateContainer within sandbox \"bb9ce754604bfc55f06b85a5da5c19969161e4d03a37a898773c4b1d49f987d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:23:32.591977 env[1229]: time="2024-02-12T19:23:32.591926498Z" level=info msg="CreateContainer within sandbox \"bb9ce754604bfc55f06b85a5da5c19969161e4d03a37a898773c4b1d49f987d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d51b8fce0a79d729b8f36ffec4dbf108d6bf9aff022562aa0a7e822af8a5bfb8\"" Feb 12 19:23:32.592705 env[1229]: time="2024-02-12T19:23:32.592654745Z" level=info msg="StartContainer for \"d51b8fce0a79d729b8f36ffec4dbf108d6bf9aff022562aa0a7e822af8a5bfb8\"" Feb 12 19:23:32.649670 env[1229]: time="2024-02-12T19:23:32.649110160Z" level=info msg="StartContainer for \"d51b8fce0a79d729b8f36ffec4dbf108d6bf9aff022562aa0a7e822af8a5bfb8\" returns successfully" Feb 12 19:23:32.699001 env[1229]: time="2024-02-12T19:23:32.698908045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-p998x,Uid:4b1fc96d-6ac9-47ec-bc58-8d8f2fc52304,Namespace:tigera-operator,Attempt:0,}" Feb 12 19:23:32.718275 env[1229]: time="2024-02-12T19:23:32.718193277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:32.718425 env[1229]: time="2024-02-12T19:23:32.718286579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:32.718425 env[1229]: time="2024-02-12T19:23:32.718314385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:32.719097 env[1229]: time="2024-02-12T19:23:32.719030030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b3091a5a2377bd768e45b0b8ecd73f78b71ebf0f1d43179b8b6a1d053ebcd0f pid=2360 runtime=io.containerd.runc.v2 Feb 12 19:23:32.786638 env[1229]: time="2024-02-12T19:23:32.786596118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-p998x,Uid:4b1fc96d-6ac9-47ec-bc58-8d8f2fc52304,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8b3091a5a2377bd768e45b0b8ecd73f78b71ebf0f1d43179b8b6a1d053ebcd0f\"" Feb 12 19:23:32.791508 env[1229]: time="2024-02-12T19:23:32.791298119Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 12 19:23:32.799000 audit[2418]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.802064 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 12 19:23:32.802223 kernel: audit: type=1325 audit(1707765812.799:229): table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.802263 kernel: audit: type=1300 audit(1707765812.799:229): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff82b34c0 a2=0 a3=ffffbb53c6c0 items=0 ppid=2337 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.799000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff82b34c0 a2=0 a3=ffffbb53c6c0 items=0 ppid=2337 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:23:32.806669 kernel: audit: type=1327 audit(1707765812.799:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:23:32.806716 kernel: audit: type=1325 audit(1707765812.799:230): table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.799000 audit[2417]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.799000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9be2f70 a2=0 a3=ffffac79d6c0 items=0 ppid=2337 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.811584 kernel: audit: type=1300 audit(1707765812.799:230): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9be2f70 a2=0 a3=ffffac79d6c0 items=0 ppid=2337 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.799000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:23:32.813771 kernel: audit: type=1327 audit(1707765812.799:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:23:32.800000 audit[2419]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.815581 kernel: audit: type=1325 audit(1707765812.800:231): table=nat:61 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.815636 kernel: audit: type=1300 audit(1707765812.800:231): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfc47fc0 a2=0 a3=ffffb0d616c0 items=0 ppid=2337 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.800000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfc47fc0 a2=0 a3=ffffb0d616c0 items=0 ppid=2337 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:23:32.819994 kernel: audit: type=1327 audit(1707765812.800:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:23:32.820040 kernel: audit: type=1325 audit(1707765812.800:232): table=nat:62 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.800000 audit[2420]: NETFILTER_CFG table=nat:62 
family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.800000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1a4fdd0 a2=0 a3=ffffaf6666c0 items=0 ppid=2337 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:23:32.800000 audit[2421]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.800000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe2ec3800 a2=0 a3=ffffb98f46c0 items=0 ppid=2337 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:23:32.801000 audit[2422]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.801000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd95a4e50 a2=0 a3=ffff9f8c26c0 items=0 ppid=2337 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 
19:23:32.904000 audit[2423]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.904000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff7862370 a2=0 a3=ffffa22bf6c0 items=0 ppid=2337 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:23:32.906000 audit[2425]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.906000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffe053850 a2=0 a3=ffffaa68c6c0 items=0 ppid=2337 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 19:23:32.909000 audit[2428]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.909000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc2067990 a2=0 a3=ffff8b2586c0 items=0 ppid=2337 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 19:23:32.910000 audit[2429]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.910000 audit[2429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe27618c0 a2=0 a3=ffffb37d76c0 items=0 ppid=2337 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:23:32.912000 audit[2431]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.912000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb1f7790 a2=0 a3=ffff9db9a6c0 items=0 ppid=2337 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:23:32.913000 audit[2432]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 12 19:23:32.913000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff901d630 a2=0 a3=ffff9765c6c0 items=0 ppid=2337 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:23:32.916000 audit[2434]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.916000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd9351ab0 a2=0 a3=ffffa5be56c0 items=0 ppid=2337 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:23:32.920000 audit[2437]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.920000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0212ae0 a2=0 a3=ffff9900f6c0 items=0 ppid=2337 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.920000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 19:23:32.921000 audit[2438]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.921000 audit[2438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcff7f680 a2=0 a3=ffffa1b066c0 items=0 ppid=2337 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:23:32.923000 audit[2440]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.923000 audit[2440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb55cf40 a2=0 a3=ffff84d3c6c0 items=0 ppid=2337 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:23:32.924000 audit[2441]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.924000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=ffffd6095db0 a2=0 a3=ffffb579e6c0 items=0 ppid=2337 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.924000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:23:32.926000 audit[2443]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.926000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec3f05f0 a2=0 a3=ffff9cfe66c0 items=0 ppid=2337 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:23:32.929000 audit[2446]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.929000 audit[2446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe34dd230 a2=0 a3=ffff80c1f6c0 items=0 ppid=2337 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.929000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:23:32.933000 audit[2449]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.933000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc5bc5f60 a2=0 a3=ffffa95956c0 items=0 ppid=2337 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:23:32.934000 audit[2450]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.934000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe94e1220 a2=0 a3=ffffab3296c0 items=0 ppid=2337 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:23:32.936000 audit[2452]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.936000 audit[2452]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=524 a0=3 a1=ffffc7c4ef00 a2=0 a3=ffff99d976c0 items=0 ppid=2337 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:23:32.939000 audit[2455]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:32.939000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc53c0680 a2=0 a3=ffff816cf6c0 items=0 ppid=2337 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:23:32.949000 audit[2459]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:32.949000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffffb9ff630 a2=0 a3=ffff80e7a6c0 items=0 ppid=2337 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.949000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:32.954000 audit[2459]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:32.954000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffffb9ff630 a2=0 a3=ffff80e7a6c0 items=0 ppid=2337 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:32.955000 audit[2464]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.955000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd193caa0 a2=0 a3=ffffa5dfa6c0 items=0 ppid=2337 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:23:32.957000 audit[2466]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.957000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffc6db300 a2=0 a3=ffffa4a826c0 items=0 ppid=2337 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 19:23:32.962000 audit[2469]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.962000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffde93ff10 a2=0 a3=ffff9d82a6c0 items=0 ppid=2337 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 19:23:32.963000 audit[2470]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.963000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec42c600 a2=0 a3=ffffb8dab6c0 items=0 ppid=2337 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:23:32.965000 audit[2472]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2472 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.965000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb0cd8f0 a2=0 a3=ffffb1a566c0 items=0 ppid=2337 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:23:32.966000 audit[2473]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.966000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc26dbf0 a2=0 a3=ffff975a96c0 items=0 ppid=2337 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:23:32.968000 audit[2475]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.968000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff80ef480 a2=0 a3=ffff908006c0 items=0 ppid=2337 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.968000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 19:23:32.971000 audit[2478]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.971000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd5f37e30 a2=0 a3=ffffbe33e6c0 items=0 ppid=2337 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:23:32.972000 audit[2479]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.972000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed003ff0 a2=0 a3=ffffb07e06c0 items=0 ppid=2337 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:23:32.974000 audit[2481]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.974000 audit[2481]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffe6155fd0 a2=0 a3=ffffb80d76c0 items=0 ppid=2337 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:23:32.975000 audit[2482]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.975000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1695670 a2=0 a3=ffffa9f026c0 items=0 ppid=2337 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:23:32.978000 audit[2484]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.978000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd2793600 a2=0 a3=ffffb20656c0 items=0 ppid=2337 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.978000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:23:32.981000 audit[2487]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.981000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff15b8c70 a2=0 a3=ffff91b206c0 items=0 ppid=2337 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:23:32.987000 audit[2491]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.987000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5cdf790 a2=0 a3=ffff907736c0 items=0 ppid=2337 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 19:23:32.988000 audit[2492]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.988000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeaa831e0 a2=0 a3=ffff96d8e6c0 items=0 ppid=2337 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.988000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:23:32.990000 audit[2494]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.990000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffcb548d80 a2=0 a3=ffff851686c0 items=0 ppid=2337 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:23:32.992000 audit[2497]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:32.992000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc1023b70 a2=0 a3=ffffbdaee6c0 items=0 ppid=2337 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.992000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:23:32.997000 audit[2501]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:23:32.997000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffcc4f14f0 a2=0 a3=ffff9a8e26c0 items=0 ppid=2337 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.997000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:32.998000 audit[2501]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:23:32.998000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffcc4f14f0 a2=0 a3=ffff9a8e26c0 items=0 ppid=2337 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:32.998000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:33.164046 kubelet[2167]: E0212 19:23:33.163919 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:33.173469 kubelet[2167]: I0212 19:23:33.173406 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-proxy-dm6r8" podStartSLOduration=1.173368756 pod.CreationTimestamp="2024-02-12 19:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:33.172018381 +0000 UTC m=+16.213068893" watchObservedRunningTime="2024-02-12 19:23:33.173368756 +0000 UTC m=+16.214419268" Feb 12 19:23:33.695117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958544359.mount: Deactivated successfully. Feb 12 19:23:33.957489 update_engine[1213]: I0212 19:23:33.957360 1213 update_attempter.cc:509] Updating boot flags... Feb 12 19:23:34.165081 kubelet[2167]: E0212 19:23:34.165048 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:34.286182 env[1229]: time="2024-02-12T19:23:34.286033956Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:34.288770 env[1229]: time="2024-02-12T19:23:34.288728916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:34.291015 env[1229]: time="2024-02-12T19:23:34.290983824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:34.293327 env[1229]: time="2024-02-12T19:23:34.293277981Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:34.293894 env[1229]: 
time="2024-02-12T19:23:34.293849459Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 12 19:23:34.295678 env[1229]: time="2024-02-12T19:23:34.295605864Z" level=info msg="CreateContainer within sandbox \"8b3091a5a2377bd768e45b0b8ecd73f78b71ebf0f1d43179b8b6a1d053ebcd0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 12 19:23:34.305146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845198424.mount: Deactivated successfully. Feb 12 19:23:34.309071 env[1229]: time="2024-02-12T19:23:34.308974520Z" level=info msg="CreateContainer within sandbox \"8b3091a5a2377bd768e45b0b8ecd73f78b71ebf0f1d43179b8b6a1d053ebcd0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf83670ad45cdb17477fb070ad410e7b641ed540ff14fe4851145577527d0add\"" Feb 12 19:23:34.309752 env[1229]: time="2024-02-12T19:23:34.309706992Z" level=info msg="StartContainer for \"cf83670ad45cdb17477fb070ad410e7b641ed540ff14fe4851145577527d0add\"" Feb 12 19:23:34.395972 env[1229]: time="2024-02-12T19:23:34.395904530Z" level=info msg="StartContainer for \"cf83670ad45cdb17477fb070ad410e7b641ed540ff14fe4851145577527d0add\" returns successfully" Feb 12 19:23:35.180174 kubelet[2167]: I0212 19:23:35.178167 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-p998x" podStartSLOduration=-9.223372033676756e+09 pod.CreationTimestamp="2024-02-12 19:23:32 +0000 UTC" firstStartedPulling="2024-02-12 19:23:32.787799515 +0000 UTC m=+15.828850027" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:35.177976238 +0000 UTC m=+18.219026750" watchObservedRunningTime="2024-02-12 19:23:35.178020287 +0000 UTC m=+18.219070799" Feb 12 19:23:36.814000 audit[2576]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2576 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:36.814000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd955eee0 a2=0 a3=ffffad8666c0 items=0 ppid=2337 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:36.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:36.815000 audit[2576]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:36.815000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd955eee0 a2=0 a3=ffffad8666c0 items=0 ppid=2337 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:36.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:36.854000 audit[2602]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:36.854000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe38797d0 a2=0 a3=ffffbaa2a6c0 items=0 ppid=2337 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:36.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:36.854000 audit[2602]: 
NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:36.854000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe38797d0 a2=0 a3=ffffbaa2a6c0 items=0 ppid=2337 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:36.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:36.908475 kubelet[2167]: I0212 19:23:36.908403 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:36.955531 kubelet[2167]: I0212 19:23:36.955493 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:36.996744 kubelet[2167]: I0212 19:23:36.996693 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-xtables-lock\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.996920 kubelet[2167]: I0212 19:23:36.996802 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjgh\" (UniqueName: \"kubernetes.io/projected/9dd2a319-4094-44f3-893e-26ab7f944069-kube-api-access-sjjgh\") pod \"calico-typha-65644bc5b5-4csgw\" (UID: \"9dd2a319-4094-44f3-893e-26ab7f944069\") " pod="calico-system/calico-typha-65644bc5b5-4csgw" Feb 12 19:23:36.996920 kubelet[2167]: I0212 19:23:36.996837 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/427700a8-036d-4130-8b84-4a6d8a9116a1-node-certs\") pod \"calico-node-x8nqh\" (UID: 
\"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.996920 kubelet[2167]: I0212 19:23:36.996868 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-flexvol-driver-host\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.996920 kubelet[2167]: I0212 19:23:36.996894 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-policysync\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997037 kubelet[2167]: I0212 19:23:36.996928 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-cni-log-dir\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997037 kubelet[2167]: I0212 19:23:36.996959 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9dd2a319-4094-44f3-893e-26ab7f944069-typha-certs\") pod \"calico-typha-65644bc5b5-4csgw\" (UID: \"9dd2a319-4094-44f3-893e-26ab7f944069\") " pod="calico-system/calico-typha-65644bc5b5-4csgw" Feb 12 19:23:36.997037 kubelet[2167]: I0212 19:23:36.996995 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/427700a8-036d-4130-8b84-4a6d8a9116a1-tigera-ca-bundle\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " 
pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997130 kubelet[2167]: I0212 19:23:36.997055 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-var-run-calico\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997130 kubelet[2167]: I0212 19:23:36.997081 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-var-lib-calico\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997130 kubelet[2167]: I0212 19:23:36.997127 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-cni-bin-dir\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997202 kubelet[2167]: I0212 19:23:36.997147 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-cni-net-dir\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997202 kubelet[2167]: I0212 19:23:36.997186 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kpv\" (UniqueName: \"kubernetes.io/projected/427700a8-036d-4130-8b84-4a6d8a9116a1-kube-api-access-p4kpv\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:36.997252 
kubelet[2167]: I0212 19:23:36.997210 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dd2a319-4094-44f3-893e-26ab7f944069-tigera-ca-bundle\") pod \"calico-typha-65644bc5b5-4csgw\" (UID: \"9dd2a319-4094-44f3-893e-26ab7f944069\") " pod="calico-system/calico-typha-65644bc5b5-4csgw" Feb 12 19:23:36.997313 kubelet[2167]: I0212 19:23:36.997296 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/427700a8-036d-4130-8b84-4a6d8a9116a1-lib-modules\") pod \"calico-node-x8nqh\" (UID: \"427700a8-036d-4130-8b84-4a6d8a9116a1\") " pod="calico-system/calico-node-x8nqh" Feb 12 19:23:37.071768 kubelet[2167]: I0212 19:23:37.071643 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:23:37.071914 kubelet[2167]: E0212 19:23:37.071891 2167 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e Feb 12 19:23:37.098045 kubelet[2167]: I0212 19:23:37.097996 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/071e00bf-e137-4b01-b026-7d482f147f4e-socket-dir\") pod \"csi-node-driver-mm48t\" (UID: \"071e00bf-e137-4b01-b026-7d482f147f4e\") " pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:37.098209 kubelet[2167]: I0212 19:23:37.098098 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2nc7\" (UniqueName: \"kubernetes.io/projected/071e00bf-e137-4b01-b026-7d482f147f4e-kube-api-access-j2nc7\") pod \"csi-node-driver-mm48t\" (UID: 
\"071e00bf-e137-4b01-b026-7d482f147f4e\") " pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:37.098209 kubelet[2167]: I0212 19:23:37.098193 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/071e00bf-e137-4b01-b026-7d482f147f4e-varrun\") pod \"csi-node-driver-mm48t\" (UID: \"071e00bf-e137-4b01-b026-7d482f147f4e\") " pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:37.098267 kubelet[2167]: I0212 19:23:37.098214 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/071e00bf-e137-4b01-b026-7d482f147f4e-kubelet-dir\") pod \"csi-node-driver-mm48t\" (UID: \"071e00bf-e137-4b01-b026-7d482f147f4e\") " pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:37.098300 kubelet[2167]: I0212 19:23:37.098271 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/071e00bf-e137-4b01-b026-7d482f147f4e-registration-dir\") pod \"csi-node-driver-mm48t\" (UID: \"071e00bf-e137-4b01-b026-7d482f147f4e\") " pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:37.106649 kubelet[2167]: E0212 19:23:37.106588 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.106649 kubelet[2167]: W0212 19:23:37.106651 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.106795 kubelet[2167]: E0212 19:23:37.106687 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.106923 kubelet[2167]: E0212 19:23:37.106897 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.106923 kubelet[2167]: W0212 19:23:37.106910 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.106923 kubelet[2167]: E0212 19:23:37.106921 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.200079 kubelet[2167]: E0212 19:23:37.200053 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.200274 kubelet[2167]: W0212 19:23:37.200256 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.200343 kubelet[2167]: E0212 19:23:37.200332 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.200634 kubelet[2167]: E0212 19:23:37.200621 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.200719 kubelet[2167]: W0212 19:23:37.200707 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.200787 kubelet[2167]: E0212 19:23:37.200778 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.201053 kubelet[2167]: E0212 19:23:37.201034 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.201053 kubelet[2167]: W0212 19:23:37.201053 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.201213 kubelet[2167]: E0212 19:23:37.201073 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.201324 kubelet[2167]: E0212 19:23:37.201293 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.201324 kubelet[2167]: W0212 19:23:37.201301 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.201324 kubelet[2167]: E0212 19:23:37.201312 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.201460 kubelet[2167]: E0212 19:23:37.201450 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.201460 kubelet[2167]: W0212 19:23:37.201459 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.201529 kubelet[2167]: E0212 19:23:37.201473 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.201757 kubelet[2167]: E0212 19:23:37.201743 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.201757 kubelet[2167]: W0212 19:23:37.201756 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.201917 kubelet[2167]: E0212 19:23:37.201773 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.202035 kubelet[2167]: E0212 19:23:37.202022 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.202035 kubelet[2167]: W0212 19:23:37.202034 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.202126 kubelet[2167]: E0212 19:23:37.202047 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.202254 kubelet[2167]: E0212 19:23:37.202238 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.202298 kubelet[2167]: W0212 19:23:37.202258 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.202298 kubelet[2167]: E0212 19:23:37.202275 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.202451 kubelet[2167]: E0212 19:23:37.202438 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.202451 kubelet[2167]: W0212 19:23:37.202450 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.202519 kubelet[2167]: E0212 19:23:37.202465 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.202669 kubelet[2167]: E0212 19:23:37.202656 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.202669 kubelet[2167]: W0212 19:23:37.202669 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.202782 kubelet[2167]: E0212 19:23:37.202685 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.202873 kubelet[2167]: E0212 19:23:37.202850 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.202873 kubelet[2167]: W0212 19:23:37.202859 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.202873 kubelet[2167]: E0212 19:23:37.202872 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.203097 kubelet[2167]: E0212 19:23:37.203074 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.203097 kubelet[2167]: W0212 19:23:37.203098 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.203224 kubelet[2167]: E0212 19:23:37.203115 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.203723 kubelet[2167]: E0212 19:23:37.203708 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.203723 kubelet[2167]: W0212 19:23:37.203723 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.203849 kubelet[2167]: E0212 19:23:37.203824 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.203921 kubelet[2167]: E0212 19:23:37.203902 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.203921 kubelet[2167]: W0212 19:23:37.203917 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.204019 kubelet[2167]: E0212 19:23:37.204007 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.204157 kubelet[2167]: E0212 19:23:37.204065 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.204242 kubelet[2167]: W0212 19:23:37.204227 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.204371 kubelet[2167]: E0212 19:23:37.204349 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.204477 kubelet[2167]: E0212 19:23:37.204466 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.204544 kubelet[2167]: W0212 19:23:37.204533 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.204638 kubelet[2167]: E0212 19:23:37.204622 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.204974 kubelet[2167]: E0212 19:23:37.204956 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.205076 kubelet[2167]: W0212 19:23:37.205062 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.205200 kubelet[2167]: E0212 19:23:37.205190 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.205583 kubelet[2167]: E0212 19:23:37.205568 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.205674 kubelet[2167]: W0212 19:23:37.205661 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.205747 kubelet[2167]: E0212 19:23:37.205737 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.205968 kubelet[2167]: E0212 19:23:37.205957 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.206045 kubelet[2167]: W0212 19:23:37.206034 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.206235 kubelet[2167]: E0212 19:23:37.206222 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.206501 kubelet[2167]: E0212 19:23:37.206487 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.206589 kubelet[2167]: W0212 19:23:37.206577 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.206653 kubelet[2167]: E0212 19:23:37.206644 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.206901 kubelet[2167]: E0212 19:23:37.206887 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.206971 kubelet[2167]: W0212 19:23:37.206960 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.207036 kubelet[2167]: E0212 19:23:37.207027 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.207248 kubelet[2167]: E0212 19:23:37.207236 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.207331 kubelet[2167]: W0212 19:23:37.207318 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.207391 kubelet[2167]: E0212 19:23:37.207382 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.207619 kubelet[2167]: E0212 19:23:37.207606 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.207695 kubelet[2167]: W0212 19:23:37.207682 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.207765 kubelet[2167]: E0212 19:23:37.207754 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.208004 kubelet[2167]: E0212 19:23:37.207992 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.208103 kubelet[2167]: W0212 19:23:37.208075 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.208185 kubelet[2167]: E0212 19:23:37.208173 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.208389 kubelet[2167]: E0212 19:23:37.208378 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.208464 kubelet[2167]: W0212 19:23:37.208453 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.208527 kubelet[2167]: E0212 19:23:37.208517 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.208768 kubelet[2167]: E0212 19:23:37.208755 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.208914 kubelet[2167]: W0212 19:23:37.208899 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.209021 kubelet[2167]: E0212 19:23:37.209000 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.209422 kubelet[2167]: E0212 19:23:37.209407 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.209517 kubelet[2167]: W0212 19:23:37.209503 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.209578 kubelet[2167]: E0212 19:23:37.209568 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.305436 kubelet[2167]: E0212 19:23:37.304208 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.305436 kubelet[2167]: W0212 19:23:37.304226 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.305436 kubelet[2167]: E0212 19:23:37.304247 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.305436 kubelet[2167]: E0212 19:23:37.304404 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.305436 kubelet[2167]: W0212 19:23:37.304417 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.305436 kubelet[2167]: E0212 19:23:37.304430 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.305436 kubelet[2167]: E0212 19:23:37.304633 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.305436 kubelet[2167]: W0212 19:23:37.304643 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.305436 kubelet[2167]: E0212 19:23:37.304659 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.325200 kubelet[2167]: E0212 19:23:37.325119 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.325200 kubelet[2167]: W0212 19:23:37.325140 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.325200 kubelet[2167]: E0212 19:23:37.325166 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.405629 kubelet[2167]: E0212 19:23:37.405595 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.405629 kubelet[2167]: W0212 19:23:37.405618 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.405629 kubelet[2167]: E0212 19:23:37.405640 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.405841 kubelet[2167]: E0212 19:23:37.405819 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.405841 kubelet[2167]: W0212 19:23:37.405835 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.405906 kubelet[2167]: E0212 19:23:37.405850 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.506668 kubelet[2167]: E0212 19:23:37.506642 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.506668 kubelet[2167]: W0212 19:23:37.506662 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.506838 kubelet[2167]: E0212 19:23:37.506683 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.506934 kubelet[2167]: E0212 19:23:37.506909 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.506934 kubelet[2167]: W0212 19:23:37.506922 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.506934 kubelet[2167]: E0212 19:23:37.506935 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:23:37.522840 kubelet[2167]: E0212 19:23:37.522817 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:23:37.522840 kubelet[2167]: W0212 19:23:37.522835 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:23:37.523004 kubelet[2167]: E0212 19:23:37.522855 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:23:37.558442 kubelet[2167]: E0212 19:23:37.558411 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:37.559177 env[1229]: time="2024-02-12T19:23:37.559128904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8nqh,Uid:427700a8-036d-4130-8b84-4a6d8a9116a1,Namespace:calico-system,Attempt:0,}" Feb 12 19:23:37.578194 env[1229]: time="2024-02-12T19:23:37.578042535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:37.578194 env[1229]: time="2024-02-12T19:23:37.578095904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:37.578194 env[1229]: time="2024-02-12T19:23:37.578106186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:23:37.578819 env[1229]: time="2024-02-12T19:23:37.578740940Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a pid=2657 runtime=io.containerd.runc.v2
Feb 12 19:23:37.607642 kubelet[2167]: E0212 19:23:37.607593 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:23:37.607642 kubelet[2167]: W0212 19:23:37.607631 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:23:37.607855 kubelet[2167]: E0212 19:23:37.607656 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:23:37.629450 env[1229]: time="2024-02-12T19:23:37.629392702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8nqh,Uid:427700a8-036d-4130-8b84-4a6d8a9116a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\""
Feb 12 19:23:37.632116 kubelet[2167]: E0212 19:23:37.632066 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:37.634005 env[1229]: time="2024-02-12T19:23:37.633963761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\""
Feb 12 19:23:37.709189 kubelet[2167]: E0212 19:23:37.709157 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:23:37.709189 kubelet[2167]: W0212 19:23:37.709176 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:23:37.709189 kubelet[2167]: E0212 19:23:37.709196 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:23:37.810380 kubelet[2167]: E0212 19:23:37.810353 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:23:37.810380 kubelet[2167]: W0212 19:23:37.810373 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:23:37.810380 kubelet[2167]: E0212 19:23:37.810394 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:23:37.811366 kubelet[2167]: E0212 19:23:37.811349 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:37.811919 env[1229]: time="2024-02-12T19:23:37.811886902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65644bc5b5-4csgw,Uid:9dd2a319-4094-44f3-893e-26ab7f944069,Namespace:calico-system,Attempt:0,}"
Feb 12 19:23:37.825821 env[1229]: time="2024-02-12T19:23:37.825743987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:23:37.825821 env[1229]: time="2024-02-12T19:23:37.825792076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:23:37.826047 env[1229]: time="2024-02-12T19:23:37.826013475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:23:37.826320 env[1229]: time="2024-02-12T19:23:37.826282764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/768bbb14df3551ff7649db41ee3eccf269f471edbd52768474ce2511ad64ff73 pid=2699 runtime=io.containerd.runc.v2
Feb 12 19:23:37.880141 env[1229]: time="2024-02-12T19:23:37.879997114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65644bc5b5-4csgw,Uid:9dd2a319-4094-44f3-893e-26ab7f944069,Namespace:calico-system,Attempt:0,} returns sandbox id \"768bbb14df3551ff7649db41ee3eccf269f471edbd52768474ce2511ad64ff73\""
Feb 12 19:23:37.881179 kubelet[2167]: E0212 19:23:37.881156 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:37.911807 kubelet[2167]: E0212 19:23:37.911774 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:23:37.911807 kubelet[2167]: W0212 19:23:37.911795 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:23:37.911807 kubelet[2167]: E0212 19:23:37.911816 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:23:37.912000 audit[2759]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:23:37.915947 kernel: kauditd_printk_skb: 134 callbacks suppressed
Feb 12 19:23:37.916032 kernel: audit: type=1325 audit(1707765817.912:277): table=filter:107 family=2 entries=14 op=nft_register_rule pid=2759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:23:37.916055 kernel: audit: type=1300 audit(1707765817.912:277): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe3b3fa80 a2=0 a3=ffffbe0856c0 items=0 ppid=2337 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:37.912000 audit[2759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe3b3fa80 a2=0 a3=ffffbe0856c0 items=0 ppid=2337 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:37.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:23:37.920345 kernel: audit: type=1327 audit(1707765817.912:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:23:37.920398 kernel: audit: type=1325 audit(1707765817.913:278): table=nat:108 family=2 entries=20 op=nft_register_rule pid=2759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:23:37.913000 audit[2759]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:23:37.920972 kubelet[2167]: E0212 19:23:37.920886 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 19:23:37.921058 kubelet[2167]: W0212 19:23:37.920971 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 19:23:37.921058 kubelet[2167]: E0212 19:23:37.920995 2167 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 19:23:37.913000 audit[2759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe3b3fa80 a2=0 a3=ffffbe0856c0 items=0 ppid=2337 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:37.924708 kernel: audit: type=1300 audit(1707765817.913:278): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe3b3fa80 a2=0 a3=ffffbe0856c0 items=0 ppid=2337 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:37.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:23:37.926531 kernel: audit: type=1327 audit(1707765817.913:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:23:38.124408 kubelet[2167]: E0212 19:23:38.124372 2167 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e
Feb 12 19:23:38.894376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72976174.mount: Deactivated successfully.
Feb 12 19:23:38.980629 env[1229]: time="2024-02-12T19:23:38.980577242Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:38.984610 env[1229]: time="2024-02-12T19:23:38.984574565Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:38.985978 env[1229]: time="2024-02-12T19:23:38.985950201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:38.987120 env[1229]: time="2024-02-12T19:23:38.987089876Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:38.987673 env[1229]: time="2024-02-12T19:23:38.987638369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\""
Feb 12 19:23:38.988851 env[1229]: time="2024-02-12T19:23:38.988802848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\""
Feb 12 19:23:38.990000 env[1229]: time="2024-02-12T19:23:38.989879753Z" level=info msg="CreateContainer within sandbox \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 12 19:23:39.008007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715046570.mount: Deactivated successfully.
Feb 12 19:23:39.012018 env[1229]: time="2024-02-12T19:23:39.011971406Z" level=info msg="CreateContainer within sandbox \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d82e4ff28ea0b881d1bb70c7979038b5318225d1374d79f97029c0df42ac5342\""
Feb 12 19:23:39.012534 env[1229]: time="2024-02-12T19:23:39.012506733Z" level=info msg="StartContainer for \"d82e4ff28ea0b881d1bb70c7979038b5318225d1374d79f97029c0df42ac5342\""
Feb 12 19:23:39.087924 env[1229]: time="2024-02-12T19:23:39.087817506Z" level=info msg="StartContainer for \"d82e4ff28ea0b881d1bb70c7979038b5318225d1374d79f97029c0df42ac5342\" returns successfully"
Feb 12 19:23:39.114501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d82e4ff28ea0b881d1bb70c7979038b5318225d1374d79f97029c0df42ac5342-rootfs.mount: Deactivated successfully.
Feb 12 19:23:39.131537 env[1229]: time="2024-02-12T19:23:39.131490315Z" level=info msg="shim disconnected" id=d82e4ff28ea0b881d1bb70c7979038b5318225d1374d79f97029c0df42ac5342
Feb 12 19:23:39.131537 env[1229]: time="2024-02-12T19:23:39.131534202Z" level=warning msg="cleaning up after shim disconnected" id=d82e4ff28ea0b881d1bb70c7979038b5318225d1374d79f97029c0df42ac5342 namespace=k8s.io
Feb 12 19:23:39.131537 env[1229]: time="2024-02-12T19:23:39.131544123Z" level=info msg="cleaning up dead shim"
Feb 12 19:23:39.138255 env[1229]: time="2024-02-12T19:23:39.138190128Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:23:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2817 runtime=io.containerd.runc.v2\n"
Feb 12 19:23:39.177390 kubelet[2167]: E0212 19:23:39.177196 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:40.000099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946034238.mount: Deactivated successfully.
Feb 12 19:23:40.125300 kubelet[2167]: E0212 19:23:40.125256 2167 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e
Feb 12 19:23:40.630023 env[1229]: time="2024-02-12T19:23:40.629974007Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:40.631517 env[1229]: time="2024-02-12T19:23:40.631477321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:40.633223 env[1229]: time="2024-02-12T19:23:40.633187148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:40.634737 env[1229]: time="2024-02-12T19:23:40.634702184Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:40.635147 env[1229]: time="2024-02-12T19:23:40.635117609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\""
Feb 12 19:23:40.636079 env[1229]: time="2024-02-12T19:23:40.636051835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\""
Feb 12 19:23:40.645341 env[1229]: time="2024-02-12T19:23:40.645292155Z" level=info msg="CreateContainer within sandbox \"768bbb14df3551ff7649db41ee3eccf269f471edbd52768474ce2511ad64ff73\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 12 19:23:40.661350 env[1229]: time="2024-02-12T19:23:40.661293851Z" level=info msg="CreateContainer within sandbox \"768bbb14df3551ff7649db41ee3eccf269f471edbd52768474ce2511ad64ff73\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4498dcaa4e640ba2c16955790d42469200b78eca7c408911c184381df9a66240\""
Feb 12 19:23:40.662113 env[1229]: time="2024-02-12T19:23:40.662070972Z" level=info msg="StartContainer for \"4498dcaa4e640ba2c16955790d42469200b78eca7c408911c184381df9a66240\""
Feb 12 19:23:40.810363 env[1229]: time="2024-02-12T19:23:40.810294726Z" level=info msg="StartContainer for \"4498dcaa4e640ba2c16955790d42469200b78eca7c408911c184381df9a66240\" returns successfully"
Feb 12 19:23:41.181761 kubelet[2167]: E0212 19:23:41.181376 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:41.911040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844135090.mount: Deactivated successfully.
Feb 12 19:23:42.125303 kubelet[2167]: E0212 19:23:42.125226 2167 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e
Feb 12 19:23:42.183074 kubelet[2167]: I0212 19:23:42.181970 2167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 19:23:42.183074 kubelet[2167]: E0212 19:23:42.182686 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:43.717380 env[1229]: time="2024-02-12T19:23:43.717308228Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:43.719453 env[1229]: time="2024-02-12T19:23:43.719416356Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:43.721499 env[1229]: time="2024-02-12T19:23:43.721465036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:43.723337 env[1229]: time="2024-02-12T19:23:43.723308608Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:23:43.723858 env[1229]: time="2024-02-12T19:23:43.723796915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\""
Feb 12 19:23:43.727624 env[1229]: time="2024-02-12T19:23:43.727578712Z" level=info msg="CreateContainer within sandbox \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 12 19:23:43.751977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973780128.mount: Deactivated successfully.
Feb 12 19:23:43.757385 env[1229]: time="2024-02-12T19:23:43.757324098Z" level=info msg="CreateContainer within sandbox \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"58a0a67b65340c2f0759b8a9d313aa70a6710e538cbce2a03150ef1b43e37495\""
Feb 12 19:23:43.759460 env[1229]: time="2024-02-12T19:23:43.759417544Z" level=info msg="StartContainer for \"58a0a67b65340c2f0759b8a9d313aa70a6710e538cbce2a03150ef1b43e37495\""
Feb 12 19:23:43.876165 env[1229]: time="2024-02-12T19:23:43.876107375Z" level=info msg="StartContainer for \"58a0a67b65340c2f0759b8a9d313aa70a6710e538cbce2a03150ef1b43e37495\" returns successfully"
Feb 12 19:23:44.124547 kubelet[2167]: E0212 19:23:44.124407 2167 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e
Feb 12 19:23:44.196514 kubelet[2167]: E0212 19:23:44.196486 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:44.218327 kubelet[2167]: I0212 19:23:44.217871 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-65644bc5b5-4csgw" podStartSLOduration=-9.223372028636951e+09 pod.CreationTimestamp="2024-02-12 19:23:36 +0000 UTC" firstStartedPulling="2024-02-12 19:23:37.881894254 +0000 UTC m=+20.922944766" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:41.193252891 +0000 UTC m=+24.234303403" watchObservedRunningTime="2024-02-12 19:23:44.217825106 +0000 UTC m=+27.258875618"
Feb 12 19:23:44.652984 env[1229]: time="2024-02-12T19:23:44.652918970Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:23:44.674623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a0a67b65340c2f0759b8a9d313aa70a6710e538cbce2a03150ef1b43e37495-rootfs.mount: Deactivated successfully.
Feb 12 19:23:44.683590 kubelet[2167]: I0212 19:23:44.683547 2167 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 19:23:44.714181 env[1229]: time="2024-02-12T19:23:44.714118231Z" level=info msg="shim disconnected" id=58a0a67b65340c2f0759b8a9d313aa70a6710e538cbce2a03150ef1b43e37495
Feb 12 19:23:44.714181 env[1229]: time="2024-02-12T19:23:44.714165518Z" level=warning msg="cleaning up after shim disconnected" id=58a0a67b65340c2f0759b8a9d313aa70a6710e538cbce2a03150ef1b43e37495 namespace=k8s.io
Feb 12 19:23:44.714181 env[1229]: time="2024-02-12T19:23:44.714177039Z" level=info msg="cleaning up dead shim"
Feb 12 19:23:44.731619 kubelet[2167]: I0212 19:23:44.731581 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:23:44.732006 kubelet[2167]: I0212 19:23:44.731985 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:23:44.734456 kubelet[2167]: I0212 19:23:44.734435 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:23:44.742510 env[1229]: time="2024-02-12T19:23:44.742461386Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:23:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2936 runtime=io.containerd.runc.v2\n"
Feb 12 19:23:44.882096 kubelet[2167]: I0212 19:23:44.882049 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tch5j\" (UniqueName: \"kubernetes.io/projected/c83bff97-aaac-4eac-a511-62a8fc342a57-kube-api-access-tch5j\") pod \"coredns-787d4945fb-b8bqg\" (UID: \"c83bff97-aaac-4eac-a511-62a8fc342a57\") " pod="kube-system/coredns-787d4945fb-b8bqg"
Feb 12 19:23:44.882267 kubelet[2167]: I0212 19:23:44.882170 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/031fa1cc-247a-40b4-ac9d-8e6f80abd15d-tigera-ca-bundle\") pod \"calico-kube-controllers-785d9f5779-p69mv\" (UID: \"031fa1cc-247a-40b4-ac9d-8e6f80abd15d\") " pod="calico-system/calico-kube-controllers-785d9f5779-p69mv"
Feb 12 19:23:44.882267 kubelet[2167]: I0212 19:23:44.882242 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2456b22f-ee9c-4a55-886a-f4394cd661b0-config-volume\") pod \"coredns-787d4945fb-jfr6q\" (UID: \"2456b22f-ee9c-4a55-886a-f4394cd661b0\") " pod="kube-system/coredns-787d4945fb-jfr6q"
Feb 12 19:23:44.882324 kubelet[2167]: I0212 19:23:44.882271 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z54mn\" (UniqueName: \"kubernetes.io/projected/031fa1cc-247a-40b4-ac9d-8e6f80abd15d-kube-api-access-z54mn\") pod \"calico-kube-controllers-785d9f5779-p69mv\" (UID: \"031fa1cc-247a-40b4-ac9d-8e6f80abd15d\") " pod="calico-system/calico-kube-controllers-785d9f5779-p69mv"
Feb 12 19:23:44.882324 kubelet[2167]: I0212 19:23:44.882297 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxsd\" (UniqueName: \"kubernetes.io/projected/2456b22f-ee9c-4a55-886a-f4394cd661b0-kube-api-access-hsxsd\") pod \"coredns-787d4945fb-jfr6q\" (UID: \"2456b22f-ee9c-4a55-886a-f4394cd661b0\") " pod="kube-system/coredns-787d4945fb-jfr6q"
Feb 12 19:23:44.882377 kubelet[2167]: I0212 19:23:44.882359 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c83bff97-aaac-4eac-a511-62a8fc342a57-config-volume\") pod \"coredns-787d4945fb-b8bqg\" (UID: \"c83bff97-aaac-4eac-a511-62a8fc342a57\") " pod="kube-system/coredns-787d4945fb-b8bqg"
Feb 12 19:23:45.034748 kubelet[2167]: E0212 19:23:45.034673 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:45.035510 env[1229]: time="2024-02-12T19:23:45.035338111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b8bqg,Uid:c83bff97-aaac-4eac-a511-62a8fc342a57,Namespace:kube-system,Attempt:0,}"
Feb 12 19:23:45.037054 kubelet[2167]: E0212 19:23:45.037016 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:45.037593 env[1229]: time="2024-02-12T19:23:45.037537427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jfr6q,Uid:2456b22f-ee9c-4a55-886a-f4394cd661b0,Namespace:kube-system,Attempt:0,}"
Feb 12 19:23:45.039701 env[1229]: time="2024-02-12T19:23:45.039651173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785d9f5779-p69mv,Uid:031fa1cc-247a-40b4-ac9d-8e6f80abd15d,Namespace:calico-system,Attempt:0,}"
Feb 12 19:23:45.040075 kubelet[2167]: I0212 19:23:45.039915 2167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 19:23:45.040766 kubelet[2167]: E0212 19:23:45.040747 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:45.200354 kubelet[2167]: E0212 19:23:45.200311 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:45.200745 kubelet[2167]: E0212 19:23:45.200446 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:23:45.201344 env[1229]: time="2024-02-12T19:23:45.201299026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\""
Feb 12 19:23:45.213313 env[1229]: time="2024-02-12T19:23:45.213232207Z" level=error msg="Failed to destroy network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.213649 env[1229]: time="2024-02-12T19:23:45.213613775Z" level=error msg="encountered an error cleaning up failed sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.213709 env[1229]: time="2024-02-12T19:23:45.213665622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jfr6q,Uid:2456b22f-ee9c-4a55-886a-f4394cd661b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.214226 kubelet[2167]: E0212 19:23:45.214190 2167 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.214313 kubelet[2167]: E0212 19:23:45.214268 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-jfr6q"
Feb 12 19:23:45.214313 kubelet[2167]: E0212 19:23:45.214289 2167 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-jfr6q"
Feb 12 19:23:45.214370 kubelet[2167]: E0212 19:23:45.214352 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-jfr6q_kube-system(2456b22f-ee9c-4a55-886a-f4394cd661b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-jfr6q_kube-system(2456b22f-ee9c-4a55-886a-f4394cd661b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-jfr6q" podUID=2456b22f-ee9c-4a55-886a-f4394cd661b0
Feb 12 19:23:45.221985 env[1229]: time="2024-02-12T19:23:45.221910059Z" level=error msg="Failed to destroy network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.222314 env[1229]: time="2024-02-12T19:23:45.222277185Z" level=error msg="encountered an error cleaning up failed sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.222372 env[1229]: time="2024-02-12T19:23:45.222330552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b8bqg,Uid:c83bff97-aaac-4eac-a511-62a8fc342a57,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.222589 kubelet[2167]: E0212 19:23:45.222561 2167 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.222662 kubelet[2167]: E0212 19:23:45.222617 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-b8bqg"
Feb 12 19:23:45.222662 kubelet[2167]: E0212 19:23:45.222650 2167 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-b8bqg"
Feb 12 19:23:45.222717 kubelet[2167]: E0212 19:23:45.222707 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-b8bqg_kube-system(c83bff97-aaac-4eac-a511-62a8fc342a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-b8bqg_kube-system(c83bff97-aaac-4eac-a511-62a8fc342a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-b8bqg" podUID=c83bff97-aaac-4eac-a511-62a8fc342a57
Feb 12 19:23:45.227049 env[1229]: time="2024-02-12T19:23:45.226976576Z" level=error msg="Failed to destroy network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.227648 env[1229]: time="2024-02-12T19:23:45.227608335Z" level=error msg="encountered an error cleaning up failed sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.227784 env[1229]: time="2024-02-12T19:23:45.227756954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785d9f5779-p69mv,Uid:031fa1cc-247a-40b4-ac9d-8e6f80abd15d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.228164 kubelet[2167]: E0212 19:23:45.228131 2167 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 19:23:45.228260 kubelet[2167]: E0212 19:23:45.228193 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-785d9f5779-p69mv"
Feb 12 19:23:45.228260 kubelet[2167]: E0212 19:23:45.228219 2167 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-785d9f5779-p69mv"
Feb 12 19:23:45.228323 kubelet[2167]: E0212 19:23:45.228273 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-785d9f5779-p69mv_calico-system(031fa1cc-247a-40b4-ac9d-8e6f80abd15d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-785d9f5779-p69mv_calico-system(031fa1cc-247a-40b4-ac9d-8e6f80abd15d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-785d9f5779-p69mv" podUID=031fa1cc-247a-40b4-ac9d-8e6f80abd15d
Feb 12 19:23:45.233000 audit[3087]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:23:45.233000
audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe14fed70 a2=0 a3=ffff8ec2e6c0 items=0 ppid=2337 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:45.239504 kernel: audit: type=1325 audit(1707765825.233:279): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:45.239592 kernel: audit: type=1300 audit(1707765825.233:279): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe14fed70 a2=0 a3=ffff8ec2e6c0 items=0 ppid=2337 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:45.239627 kernel: audit: type=1327 audit(1707765825.233:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:45.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:45.244000 audit[3087]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:45.244000 audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffe14fed70 a2=0 a3=ffff8ec2e6c0 items=0 ppid=2337 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:45.251008 kernel: audit: type=1325 audit(1707765825.244:280): table=nat:110 family=2 entries=27 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 12 19:23:45.251091 kernel: audit: type=1300 audit(1707765825.244:280): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffe14fed70 a2=0 a3=ffff8ec2e6c0 items=0 ppid=2337 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:45.251134 kernel: audit: type=1327 audit(1707765825.244:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:45.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:45.997542 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6-shm.mount: Deactivated successfully. Feb 12 19:23:45.997695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3-shm.mount: Deactivated successfully. 
Feb 12 19:23:46.128224 env[1229]: time="2024-02-12T19:23:46.127981360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mm48t,Uid:071e00bf-e137-4b01-b026-7d482f147f4e,Namespace:calico-system,Attempt:0,}" Feb 12 19:23:46.179972 env[1229]: time="2024-02-12T19:23:46.179900514Z" level=error msg="Failed to destroy network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.182590 env[1229]: time="2024-02-12T19:23:46.180292401Z" level=error msg="encountered an error cleaning up failed sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.182590 env[1229]: time="2024-02-12T19:23:46.180338727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mm48t,Uid:071e00bf-e137-4b01-b026-7d482f147f4e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.181802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446-shm.mount: Deactivated successfully. 
Feb 12 19:23:46.182754 kubelet[2167]: E0212 19:23:46.180601 2167 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.182754 kubelet[2167]: E0212 19:23:46.180656 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:46.182754 kubelet[2167]: E0212 19:23:46.180680 2167 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mm48t" Feb 12 19:23:46.182837 kubelet[2167]: E0212 19:23:46.180737 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mm48t_calico-system(071e00bf-e137-4b01-b026-7d482f147f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mm48t_calico-system(071e00bf-e137-4b01-b026-7d482f147f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e Feb 12 19:23:46.202231 kubelet[2167]: I0212 19:23:46.202202 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:23:46.202923 env[1229]: time="2024-02-12T19:23:46.202887492Z" level=info msg="StopPodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\"" Feb 12 19:23:46.205720 kubelet[2167]: I0212 19:23:46.205692 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:23:46.206476 env[1229]: time="2024-02-12T19:23:46.206430920Z" level=info msg="StopPodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\"" Feb 12 19:23:46.206882 kubelet[2167]: I0212 19:23:46.206836 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:23:46.207348 env[1229]: time="2024-02-12T19:23:46.207309546Z" level=info msg="StopPodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\"" Feb 12 19:23:46.208139 kubelet[2167]: I0212 19:23:46.208115 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:23:46.208658 env[1229]: time="2024-02-12T19:23:46.208620624Z" level=info msg="StopPodSandbox for \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\"" Feb 12 19:23:46.248538 env[1229]: time="2024-02-12T19:23:46.248372828Z" level=error msg="StopPodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" failed" error="failed to 
destroy network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.249040 env[1229]: time="2024-02-12T19:23:46.248947497Z" level=error msg="StopPodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" failed" error="failed to destroy network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.257428 env[1229]: time="2024-02-12T19:23:46.257354113Z" level=error msg="StopPodSandbox for \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" failed" error="failed to destroy network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.263550 env[1229]: time="2024-02-12T19:23:46.263434728Z" level=error msg="StopPodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" failed" error="failed to destroy network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:23:46.265659 kubelet[2167]: E0212 19:23:46.265548 2167 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:23:46.265659 kubelet[2167]: E0212 19:23:46.265586 2167 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:23:46.265659 kubelet[2167]: E0212 19:23:46.265581 2167 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:23:46.265659 kubelet[2167]: E0212 19:23:46.265632 2167 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9} Feb 12 19:23:46.265659 kubelet[2167]: E0212 19:23:46.265637 2167 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446} Feb 12 19:23:46.265925 kubelet[2167]: E0212 19:23:46.265648 2167 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6} Feb 12 
19:23:46.265925 kubelet[2167]: E0212 19:23:46.265671 2167 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"071e00bf-e137-4b01-b026-7d482f147f4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:23:46.265925 kubelet[2167]: E0212 19:23:46.265679 2167 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"031fa1cc-247a-40b4-ac9d-8e6f80abd15d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:23:46.265925 kubelet[2167]: E0212 19:23:46.265603 2167 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:23:46.266063 kubelet[2167]: E0212 19:23:46.265702 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"071e00bf-e137-4b01-b026-7d482f147f4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mm48t" podUID=071e00bf-e137-4b01-b026-7d482f147f4e Feb 12 19:23:46.266063 kubelet[2167]: E0212 19:23:46.265710 2167 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c83bff97-aaac-4eac-a511-62a8fc342a57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:23:46.266063 kubelet[2167]: E0212 19:23:46.265703 2167 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3} Feb 12 19:23:46.266063 kubelet[2167]: E0212 19:23:46.265740 2167 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2456b22f-ee9c-4a55-886a-f4394cd661b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:23:46.266247 kubelet[2167]: E0212 19:23:46.265757 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c83bff97-aaac-4eac-a511-62a8fc342a57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-b8bqg" podUID=c83bff97-aaac-4eac-a511-62a8fc342a57 Feb 12 19:23:46.266247 kubelet[2167]: E0212 19:23:46.265763 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2456b22f-ee9c-4a55-886a-f4394cd661b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-jfr6q" podUID=2456b22f-ee9c-4a55-886a-f4394cd661b0 Feb 12 19:23:46.266247 kubelet[2167]: E0212 19:23:46.265705 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"031fa1cc-247a-40b4-ac9d-8e6f80abd15d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-785d9f5779-p69mv" podUID=031fa1cc-247a-40b4-ac9d-8e6f80abd15d Feb 12 19:23:50.132630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875877687.mount: Deactivated successfully. 
Feb 12 19:23:50.490588 env[1229]: time="2024-02-12T19:23:50.490535322Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.492437 env[1229]: time="2024-02-12T19:23:50.492396436Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.493674 env[1229]: time="2024-02-12T19:23:50.493645005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.495184 env[1229]: time="2024-02-12T19:23:50.495142281Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:50.495772 env[1229]: time="2024-02-12T19:23:50.495729942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 12 19:23:50.508764 env[1229]: time="2024-02-12T19:23:50.508709731Z" level=info msg="CreateContainer within sandbox \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 19:23:50.519770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724467381.mount: Deactivated successfully. 
Feb 12 19:23:50.520742 env[1229]: time="2024-02-12T19:23:50.520693697Z" level=info msg="CreateContainer within sandbox \"1ee3aa7b1cfdbff9c082a6025caf114e719d32a14c75715946c18c1626641e2a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"30a3db6055831aa59665c0d8d5107943412162d3bb5c79ce53f0f42e72d62ca9\"" Feb 12 19:23:50.521417 env[1229]: time="2024-02-12T19:23:50.521381568Z" level=info msg="StartContainer for \"30a3db6055831aa59665c0d8d5107943412162d3bb5c79ce53f0f42e72d62ca9\"" Feb 12 19:23:50.619360 env[1229]: time="2024-02-12T19:23:50.619310308Z" level=info msg="StartContainer for \"30a3db6055831aa59665c0d8d5107943412162d3bb5c79ce53f0f42e72d62ca9\" returns successfully" Feb 12 19:23:50.780255 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 12 19:23:50.780400 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 12 19:23:51.221063 kubelet[2167]: E0212 19:23:51.221011 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:51.236425 kubelet[2167]: I0212 19:23:51.236386 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-x8nqh" podStartSLOduration=-9.223372021618425e+09 pod.CreationTimestamp="2024-02-12 19:23:36 +0000 UTC" firstStartedPulling="2024-02-12 19:23:37.633413343 +0000 UTC m=+20.674463855" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:51.23482993 +0000 UTC m=+34.275880442" watchObservedRunningTime="2024-02-12 19:23:51.236349762 +0000 UTC m=+34.277400274" Feb 12 19:23:52.125000 audit[3339]: AVC avc: denied { write } for pid=3339 comm="tee" name="fd" dev="proc" ino=20773 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.138895 kernel: audit: type=1400 
audit(1707765832.125:281): avc: denied { write } for pid=3339 comm="tee" name="fd" dev="proc" ino=20773 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.138999 kernel: audit: type=1300 audit(1707765832.125:281): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff749a97f a2=241 a3=1b6 items=1 ppid=3303 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.139029 kernel: audit: type=1400 audit(1707765832.130:282): avc: denied { write } for pid=3361 comm="tee" name="fd" dev="proc" ino=19061 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.139054 kernel: audit: type=1300 audit(1707765832.130:282): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe89f1980 a2=241 a3=1b6 items=1 ppid=3306 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.139100 kernel: audit: type=1307 audit(1707765832.130:282): cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 19:23:52.139132 kernel: audit: type=1302 audit(1707765832.130:282): item=0 name="/dev/fd/63" inode=19058 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.125000 audit[3339]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff749a97f a2=241 a3=1b6 items=1 ppid=3303 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.130000 audit[3361]: AVC avc: denied { 
write } for pid=3361 comm="tee" name="fd" dev="proc" ino=19061 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.130000 audit[3361]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe89f1980 a2=241 a3=1b6 items=1 ppid=3306 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.130000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 19:23:52.130000 audit: PATH item=0 name="/dev/fd/63" inode=19058 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.141499 kernel: audit: type=1327 audit(1707765832.130:282): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.134000 audit[3348]: AVC avc: denied { write } for pid=3348 comm="tee" name="fd" dev="proc" ino=19065 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.147986 kernel: audit: type=1400 audit(1707765832.134:283): avc: denied { write } for pid=3348 comm="tee" name="fd" dev="proc" ino=19065 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.134000 audit[3348]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc07bb98f a2=241 a3=1b6 items=1 ppid=3298 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.134000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 12 19:23:52.162077 kernel: audit: type=1300 audit(1707765832.134:283): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc07bb98f a2=241 a3=1b6 items=1 ppid=3298 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.162157 kernel: audit: type=1307 audit(1707765832.134:283): cwd="/etc/service/enabled/felix/log" Feb 12 19:23:52.134000 audit: PATH item=0 name="/dev/fd/63" inode=19747 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.134000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.135000 audit[3352]: AVC avc: denied { write } for pid=3352 comm="tee" name="fd" dev="proc" ino=18007 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.135000 audit[3352]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffca4f6991 a2=241 a3=1b6 items=1 ppid=3301 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.135000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 12 19:23:52.135000 audit: PATH item=0 name="/dev/fd/63" inode=17998 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.135000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.125000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 19:23:52.125000 audit: PATH item=0 name="/dev/fd/63" inode=20770 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.145000 audit[3358]: AVC avc: denied { write } for pid=3358 comm="tee" name="fd" dev="proc" ino=19757 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.145000 audit[3358]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff54fe98f a2=241 a3=1b6 items=1 ppid=3320 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.145000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 12 19:23:52.145000 audit: PATH item=0 name="/dev/fd/63" inode=18001 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.161000 audit[3368]: AVC avc: denied { write } for pid=3368 comm="tee" name="fd" dev="proc" ino=19766 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.161000 audit[3368]: SYSCALL arch=c00000b7 syscall=56 
success=yes exit=3 a0=ffffffffffffff9c a1=ffffe621e98f a2=241 a3=1b6 items=1 ppid=3332 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.161000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 12 19:23:52.161000 audit: PATH item=0 name="/dev/fd/63" inode=18009 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.175000 audit[3376]: AVC avc: denied { write } for pid=3376 comm="tee" name="fd" dev="proc" ino=20777 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:23:52.175000 audit[3376]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe2275990 a2=241 a3=1b6 items=1 ppid=3305 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.175000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 12 19:23:52.175000 audit: PATH item=0 name="/dev/fd/63" inode=19763 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:23:52.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:23:52.222216 kubelet[2167]: I0212 19:23:52.222123 2167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:23:52.223101 kubelet[2167]: E0212 
19:23:52.223056 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit: BPF prog-id=10 op=LOAD Feb 12 19:23:52.450000 audit[3446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed637c08 a2=70 a3=0 items=0 ppid=3299 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.450000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:23:52.450000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit: BPF prog-id=11 op=LOAD Feb 12 19:23:52.450000 audit[3446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed637c08 a2=70 a3=4a174c items=0 ppid=3299 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.450000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:23:52.450000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:23:52.450000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.450000 audit[3446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffed637c38 a2=70 a3=766879f items=0 ppid=3299 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 12 19:23:52.450000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { perfmon } for pid=3446 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 
audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit[3446]: AVC avc: denied { bpf } for pid=3446 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.451000 audit: BPF prog-id=12 op=LOAD Feb 12 19:23:52.451000 audit[3446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffed637b88 a2=70 a3=76687b9 items=0 ppid=3299 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.451000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:23:52.459000 audit[3448]: AVC avc: denied { bpf } for pid=3448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:52.459000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd3cac368 a2=70 a3=0 items=0 ppid=3299 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.459000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:23:52.459000 audit[3448]: AVC avc: denied { bpf } for pid=3448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 12 19:23:52.459000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd3cac248 a2=70 a3=2 items=0 ppid=3299 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.459000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:23:52.472000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:23:52.532000 audit[3474]: NETFILTER_CFG table=mangle:111 family=2 entries=19 op=nft_register_chain pid=3474 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:23:52.532000 audit[3474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffeed12390 a2=0 a3=ffff9bf01fa8 items=0 ppid=3299 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.532000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:23:52.540000 audit[3476]: NETFILTER_CFG table=nat:112 family=2 entries=16 op=nft_register_chain pid=3476 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:23:52.540000 audit[3476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffcb342210 a2=0 a3=ffffb405dfa8 items=0 ppid=3299 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.540000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:23:52.540000 audit[3477]: NETFILTER_CFG table=filter:113 family=2 entries=39 op=nft_register_chain pid=3477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:23:52.540000 audit[3477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=ffffef359390 a2=0 a3=ffffb033ffa8 items=0 ppid=3299 pid=3477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.540000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:23:52.541000 audit[3475]: NETFILTER_CFG table=raw:114 family=2 entries=19 op=nft_register_chain pid=3475 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:23:52.541000 audit[3475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffd7b5efb0 a2=0 a3=ffffa9e8efa8 items=0 ppid=3299 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:52.541000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:23:53.351137 systemd-networkd[1105]: vxlan.calico: Link UP Feb 12 19:23:53.351143 systemd-networkd[1105]: vxlan.calico: Gained carrier Feb 12 19:23:54.887246 systemd-networkd[1105]: vxlan.calico: Gained IPv6LL Feb 12 19:23:57.125505 env[1229]: time="2024-02-12T19:23:57.125462056Z" level=info msg="StopPodSandbox for 
\"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\"" Feb 12 19:23:57.125869 env[1229]: time="2024-02-12T19:23:57.125467816Z" level=info msg="StopPodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\"" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.250 [INFO][3523] k8s.go 578: Cleaning up netns ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.252 [INFO][3523] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" iface="eth0" netns="/var/run/netns/cni-8acc95c0-4c6b-23c7-feb9-ab00e5c6ece2" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.252 [INFO][3523] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" iface="eth0" netns="/var/run/netns/cni-8acc95c0-4c6b-23c7-feb9-ab00e5c6ece2" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.253 [INFO][3523] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" iface="eth0" netns="/var/run/netns/cni-8acc95c0-4c6b-23c7-feb9-ab00e5c6ece2" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.253 [INFO][3523] k8s.go 585: Releasing IP address(es) ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.253 [INFO][3523] utils.go 188: Calico CNI releasing IP address ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.366 [INFO][3538] ipam_plugin.go 415: Releasing address using handleID ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.366 [INFO][3538] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.367 [INFO][3538] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.378 [WARNING][3538] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.378 [INFO][3538] ipam_plugin.go 443: Releasing address using workloadID ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.379 [INFO][3538] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:23:57.383169 env[1229]: 2024-02-12 19:23:57.381 [INFO][3523] k8s.go 591: Teardown processing complete. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:23:57.386064 env[1229]: time="2024-02-12T19:23:57.383256598Z" level=info msg="TearDown network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" successfully" Feb 12 19:23:57.386064 env[1229]: time="2024-02-12T19:23:57.383290161Z" level=info msg="StopPodSandbox for \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" returns successfully" Feb 12 19:23:57.385076 systemd[1]: run-netns-cni\x2d8acc95c0\x2d4c6b\x2d23c7\x2dfeb9\x2dab00e5c6ece2.mount: Deactivated successfully. Feb 12 19:23:57.386995 kubelet[2167]: E0212 19:23:57.386954 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:57.388147 env[1229]: time="2024-02-12T19:23:57.388039316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jfr6q,Uid:2456b22f-ee9c-4a55-886a-f4394cd661b0,Namespace:kube-system,Attempt:1,}" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.260 [INFO][3524] k8s.go 578: Cleaning up netns ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.260 [INFO][3524] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" iface="eth0" netns="/var/run/netns/cni-0618efb1-3ea9-8b0c-bb39-f87f70dd4306" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.260 [INFO][3524] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" iface="eth0" netns="/var/run/netns/cni-0618efb1-3ea9-8b0c-bb39-f87f70dd4306" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.263 [INFO][3524] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" iface="eth0" netns="/var/run/netns/cni-0618efb1-3ea9-8b0c-bb39-f87f70dd4306" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.263 [INFO][3524] k8s.go 585: Releasing IP address(es) ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.263 [INFO][3524] utils.go 188: Calico CNI releasing IP address ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.366 [INFO][3540] ipam_plugin.go 415: Releasing address using handleID ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.367 [INFO][3540] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.379 [INFO][3540] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.394 [WARNING][3540] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.395 [INFO][3540] ipam_plugin.go 443: Releasing address using workloadID ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.397 [INFO][3540] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:23:57.401590 env[1229]: 2024-02-12 19:23:57.399 [INFO][3524] k8s.go 591: Teardown processing complete. ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:23:57.404806 systemd[1]: run-netns-cni\x2d0618efb1\x2d3ea9\x2d8b0c\x2dbb39\x2df87f70dd4306.mount: Deactivated successfully. 
Feb 12 19:23:57.405329 env[1229]: time="2024-02-12T19:23:57.405175940Z" level=info msg="TearDown network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" successfully" Feb 12 19:23:57.405329 env[1229]: time="2024-02-12T19:23:57.405321832Z" level=info msg="StopPodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" returns successfully" Feb 12 19:23:57.405972 env[1229]: time="2024-02-12T19:23:57.405936763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785d9f5779-p69mv,Uid:031fa1cc-247a-40b4-ac9d-8e6f80abd15d,Namespace:calico-system,Attempt:1,}" Feb 12 19:23:57.551043 systemd-networkd[1105]: calic7894daac8f: Link UP Feb 12 19:23:57.560379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:23:57.560496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic7894daac8f: link becomes ready Feb 12 19:23:57.560627 systemd-networkd[1105]: calic7894daac8f: Gained carrier Feb 12 19:23:57.568138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2d0e9045a8f: link becomes ready Feb 12 19:23:57.568319 systemd-networkd[1105]: cali2d0e9045a8f: Link UP Feb 12 19:23:57.568500 systemd-networkd[1105]: cali2d0e9045a8f: Gained carrier Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.457 [INFO][3554] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--jfr6q-eth0 coredns-787d4945fb- kube-system 2456b22f-ee9c-4a55-886a-f4394cd661b0 702 0 2024-02-12 19:23:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-jfr6q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d0e9045a8f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" 
Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.457 [INFO][3554] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.486 [INFO][3580] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" HandleID="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.509 [INFO][3580] ipam_plugin.go 268: Auto assigning IP ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" HandleID="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029da10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-jfr6q", "timestamp":"2024-02-12 19:23:57.48664391 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.509 [INFO][3580] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.531 [INFO][3580] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.531 [INFO][3580] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.533 [INFO][3580] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.541 [INFO][3580] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.544 [INFO][3580] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.547 [INFO][3580] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.550 [INFO][3580] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.550 [INFO][3580] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.552 [INFO][3580] ipam.go 1682: Creating new handle: k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730 Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.556 [INFO][3580] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.561 [INFO][3580] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.561 
[INFO][3580] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" host="localhost" Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.561 [INFO][3580] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:23:57.584586 env[1229]: 2024-02-12 19:23:57.561 [INFO][3580] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" HandleID="k8s-pod-network.ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.585160 env[1229]: 2024-02-12 19:23:57.563 [INFO][3554] k8s.go 385: Populated endpoint ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--jfr6q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2456b22f-ee9c-4a55-886a-f4394cd661b0", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-jfr6q", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d0e9045a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:23:57.585160 env[1229]: 2024-02-12 19:23:57.563 [INFO][3554] k8s.go 386: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.585160 env[1229]: 2024-02-12 19:23:57.563 [INFO][3554] dataplane_linux.go 68: Setting the host side veth name to cali2d0e9045a8f ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.585160 env[1229]: 2024-02-12 19:23:57.568 [INFO][3554] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.585160 env[1229]: 2024-02-12 19:23:57.569 [INFO][3554] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" 
Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--jfr6q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2456b22f-ee9c-4a55-886a-f4394cd661b0", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730", Pod:"coredns-787d4945fb-jfr6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d0e9045a8f", MAC:"ca:45:59:9b:38:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:23:57.585160 env[1229]: 2024-02-12 19:23:57.582 [INFO][3554] k8s.go 491: Wrote 
updated endpoint to datastore ContainerID="ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730" Namespace="kube-system" Pod="coredns-787d4945fb-jfr6q" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.460 [INFO][3559] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0 calico-kube-controllers-785d9f5779- calico-system 031fa1cc-247a-40b4-ac9d-8e6f80abd15d 703 0 2024-02-12 19:23:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:785d9f5779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-785d9f5779-p69mv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7894daac8f [] []}} ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.460 [INFO][3559] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.486 [INFO][3581] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" HandleID="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 
19:23:57.585847 env[1229]: 2024-02-12 19:23:57.501 [INFO][3581] ipam_plugin.go 268: Auto assigning IP ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" HandleID="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000cda70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-785d9f5779-p69mv", "timestamp":"2024-02-12 19:23:57.48664423 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.501 [INFO][3581] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.501 [INFO][3581] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.501 [INFO][3581] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.503 [INFO][3581] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.509 [INFO][3581] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.513 [INFO][3581] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.515 [INFO][3581] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.517 [INFO][3581] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.517 [INFO][3581] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.519 [INFO][3581] ipam.go 1682: Creating new handle: k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.522 [INFO][3581] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.529 [INFO][3581] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.529 
[INFO][3581] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" host="localhost" Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.529 [INFO][3581] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:23:57.585847 env[1229]: 2024-02-12 19:23:57.529 [INFO][3581] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" HandleID="k8s-pod-network.fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.586386 env[1229]: 2024-02-12 19:23:57.532 [INFO][3559] k8s.go 385: Populated endpoint ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0", GenerateName:"calico-kube-controllers-785d9f5779-", Namespace:"calico-system", SelfLink:"", UID:"031fa1cc-247a-40b4-ac9d-8e6f80abd15d", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785d9f5779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-785d9f5779-p69mv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7894daac8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:23:57.586386 env[1229]: 2024-02-12 19:23:57.532 [INFO][3559] k8s.go 386: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.586386 env[1229]: 2024-02-12 19:23:57.533 [INFO][3559] dataplane_linux.go 68: Setting the host side veth name to calic7894daac8f ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.586386 env[1229]: 2024-02-12 19:23:57.560 [INFO][3559] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.586386 env[1229]: 2024-02-12 19:23:57.561 [INFO][3559] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0", GenerateName:"calico-kube-controllers-785d9f5779-", Namespace:"calico-system", SelfLink:"", UID:"031fa1cc-247a-40b4-ac9d-8e6f80abd15d", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785d9f5779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f", Pod:"calico-kube-controllers-785d9f5779-p69mv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7894daac8f", MAC:"4e:63:7f:f8:01:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:23:57.586386 env[1229]: 2024-02-12 19:23:57.580 [INFO][3559] k8s.go 491: Wrote updated endpoint to datastore ContainerID="fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f" Namespace="calico-system" Pod="calico-kube-controllers-785d9f5779-p69mv" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:23:57.598440 env[1229]: time="2024-02-12T19:23:57.598359833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:57.598563 env[1229]: time="2024-02-12T19:23:57.598439960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:57.598563 env[1229]: time="2024-02-12T19:23:57.598468042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:57.598877 env[1229]: time="2024-02-12T19:23:57.598737144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730 pid=3639 runtime=io.containerd.runc.v2 Feb 12 19:23:57.606166 env[1229]: time="2024-02-12T19:23:57.605864177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:23:57.606166 env[1229]: time="2024-02-12T19:23:57.605915181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:23:57.606166 env[1229]: time="2024-02-12T19:23:57.605926422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:23:57.606377 env[1229]: time="2024-02-12T19:23:57.606178603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f pid=3660 runtime=io.containerd.runc.v2 Feb 12 19:23:57.616000 audit[3683]: NETFILTER_CFG table=filter:115 family=2 entries=68 op=nft_register_chain pid=3683 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:23:57.621672 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 12 19:23:57.621759 kernel: audit: type=1325 audit(1707765837.616:301): table=filter:115 family=2 entries=68 op=nft_register_chain pid=3683 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:23:57.621793 kernel: audit: type=1300 audit(1707765837.616:301): arch=c00000b7 syscall=211 success=yes exit=38072 a0=3 a1=ffffd7a98640 a2=0 a3=ffff99208fa8 items=0 ppid=3299 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:57.616000 audit[3683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=38072 a0=3 a1=ffffd7a98640 a2=0 a3=ffff99208fa8 items=0 ppid=3299 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:57.616000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:23:57.641223 kernel: audit: type=1327 audit(1707765837.616:301): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 
19:23:57.652030 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:23:57.675380 env[1229]: time="2024-02-12T19:23:57.675321388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jfr6q,Uid:2456b22f-ee9c-4a55-886a-f4394cd661b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730\"" Feb 12 19:23:57.675991 kubelet[2167]: E0212 19:23:57.675969 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:57.679426 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:23:57.679735 env[1229]: time="2024-02-12T19:23:57.679689631Z" level=info msg="CreateContainer within sandbox \"ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:23:57.697963 env[1229]: time="2024-02-12T19:23:57.697903705Z" level=info msg="CreateContainer within sandbox \"ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8bb0a0a21afab09f07745e44b0014fc6d5fca7f2c8feec5d8736d340af06f3e\"" Feb 12 19:23:57.698699 env[1229]: time="2024-02-12T19:23:57.698637726Z" level=info msg="StartContainer for \"d8bb0a0a21afab09f07745e44b0014fc6d5fca7f2c8feec5d8736d340af06f3e\"" Feb 12 19:23:57.699452 env[1229]: time="2024-02-12T19:23:57.699416511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785d9f5779-p69mv,Uid:031fa1cc-247a-40b4-ac9d-8e6f80abd15d,Namespace:calico-system,Attempt:1,} returns sandbox id \"fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f\"" Feb 12 19:23:57.701103 env[1229]: time="2024-02-12T19:23:57.701056047Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 12 19:23:57.751333 env[1229]: time="2024-02-12T19:23:57.751288621Z" level=info msg="StartContainer for \"d8bb0a0a21afab09f07745e44b0014fc6d5fca7f2c8feec5d8736d340af06f3e\" returns successfully" Feb 12 19:23:58.234124 kubelet[2167]: E0212 19:23:58.233839 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:58.248341 kubelet[2167]: I0212 19:23:58.247990 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jfr6q" podStartSLOduration=26.24795645 pod.CreationTimestamp="2024-02-12 19:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:23:58.247529616 +0000 UTC m=+41.288580128" watchObservedRunningTime="2024-02-12 19:23:58.24795645 +0000 UTC m=+41.289006962" Feb 12 19:23:58.327000 audit[3783]: NETFILTER_CFG table=filter:116 family=2 entries=12 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.327000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff9954ae0 a2=0 a3=ffff82d9e6c0 items=0 ppid=2337 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:58.333046 kernel: audit: type=1325 audit(1707765838.327:302): table=filter:116 family=2 entries=12 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.333113 kernel: audit: type=1300 audit(1707765838.327:302): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff9954ae0 a2=0 a3=ffff82d9e6c0 items=0 ppid=2337 pid=3783 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:58.333139 kernel: audit: type=1327 audit(1707765838.327:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:58.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:58.328000 audit[3783]: NETFILTER_CFG table=nat:117 family=2 entries=30 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.328000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=fffff9954ae0 a2=0 a3=ffff82d9e6c0 items=0 ppid=2337 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:58.344658 kernel: audit: type=1325 audit(1707765838.328:303): table=nat:117 family=2 entries=30 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.344747 kernel: audit: type=1300 audit(1707765838.328:303): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=fffff9954ae0 a2=0 a3=ffff82d9e6c0 items=0 ppid=2337 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:58.344777 kernel: audit: type=1327 audit(1707765838.328:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:58.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 
19:23:58.422000 audit[3809]: NETFILTER_CFG table=filter:118 family=2 entries=9 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.422000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff1a0e460 a2=0 a3=ffff999dd6c0 items=0 ppid=2337 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:58.422000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:58.426103 kernel: audit: type=1325 audit(1707765838.422:304): table=filter:118 family=2 entries=9 op=nft_register_rule pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.426000 audit[3809]: NETFILTER_CFG table=nat:119 family=2 entries=51 op=nft_register_chain pid=3809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:23:58.426000 audit[3809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffff1a0e460 a2=0 a3=ffff999dd6c0 items=0 ppid=2337 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:58.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:23:58.840280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952209950.mount: Deactivated successfully. 
Feb 12 19:23:59.238884 kubelet[2167]: E0212 19:23:59.238844 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:23:59.303287 systemd-networkd[1105]: cali2d0e9045a8f: Gained IPv6LL Feb 12 19:23:59.358897 env[1229]: time="2024-02-12T19:23:59.358841663Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:59.361001 env[1229]: time="2024-02-12T19:23:59.360968031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:59.363100 env[1229]: time="2024-02-12T19:23:59.363047234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:59.364560 env[1229]: time="2024-02-12T19:23:59.364418582Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:23:59.365340 env[1229]: time="2024-02-12T19:23:59.365227566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 12 19:23:59.378466 env[1229]: time="2024-02-12T19:23:59.378402082Z" level=info msg="CreateContainer within sandbox \"fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 12 19:23:59.391253 env[1229]: 
time="2024-02-12T19:23:59.391199289Z" level=info msg="CreateContainer within sandbox \"fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"03f87f17a8f929a95db4bbafda7ee1e95083e33eef1c836218fa2873141f1ec7\"" Feb 12 19:23:59.391705 env[1229]: time="2024-02-12T19:23:59.391682047Z" level=info msg="StartContainer for \"03f87f17a8f929a95db4bbafda7ee1e95083e33eef1c836218fa2873141f1ec7\"" Feb 12 19:23:59.487343 env[1229]: time="2024-02-12T19:23:59.487296970Z" level=info msg="StartContainer for \"03f87f17a8f929a95db4bbafda7ee1e95083e33eef1c836218fa2873141f1ec7\" returns successfully" Feb 12 19:23:59.495299 systemd-networkd[1105]: calic7894daac8f: Gained IPv6LL Feb 12 19:23:59.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.84:22-10.0.0.1:41414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:59.810143 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:41414.service. 
Feb 12 19:23:59.873000 audit[3851]: USER_ACCT pid=3851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.873536 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 41414 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:23:59.877000 audit[3851]: CRED_ACQ pid=3851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.877000 audit[3851]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc43f3ab0 a2=3 a3=1 items=0 ppid=1 pid=3851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:59.877000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:23:59.877758 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:59.883711 systemd-logind[1207]: New session 8 of user core. Feb 12 19:23:59.884585 systemd[1]: Started session-8.scope. 
Feb 12 19:23:59.890000 audit[3851]: USER_START pid=3851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.891000 audit[3854]: CRED_ACQ pid=3854 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:00.059302 sshd[3851]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:00.059000 audit[3851]: USER_END pid=3851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:00.059000 audit[3851]: CRED_DISP pid=3851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:00.062189 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:41414.service: Deactivated successfully. Feb 12 19:24:00.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.84:22-10.0.0.1:41414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:00.063829 systemd-logind[1207]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:24:00.063888 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:24:00.064577 systemd-logind[1207]: Removed session 8. 
Feb 12 19:24:00.125945 env[1229]: time="2024-02-12T19:24:00.125751950Z" level=info msg="StopPodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\"" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.171 [INFO][3884] k8s.go 578: Cleaning up netns ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.171 [INFO][3884] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" iface="eth0" netns="/var/run/netns/cni-fa5ac272-0a71-545f-2223-fe87b90109a0" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.172 [INFO][3884] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" iface="eth0" netns="/var/run/netns/cni-fa5ac272-0a71-545f-2223-fe87b90109a0" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.172 [INFO][3884] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" iface="eth0" netns="/var/run/netns/cni-fa5ac272-0a71-545f-2223-fe87b90109a0" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.172 [INFO][3884] k8s.go 585: Releasing IP address(es) ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.172 [INFO][3884] utils.go 188: Calico CNI releasing IP address ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.192 [INFO][3892] ipam_plugin.go 415: Releasing address using handleID ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.192 [INFO][3892] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.192 [INFO][3892] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.202 [WARNING][3892] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.202 [INFO][3892] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.204 [INFO][3892] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:00.210953 env[1229]: 2024-02-12 19:24:00.208 [INFO][3884] k8s.go 591: Teardown processing complete. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:00.213642 env[1229]: time="2024-02-12T19:24:00.213269941Z" level=info msg="TearDown network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" successfully" Feb 12 19:24:00.213642 env[1229]: time="2024-02-12T19:24:00.213305303Z" level=info msg="StopPodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" returns successfully" Feb 12 19:24:00.214524 kubelet[2167]: E0212 19:24:00.214347 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:00.214886 env[1229]: time="2024-02-12T19:24:00.214845581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b8bqg,Uid:c83bff97-aaac-4eac-a511-62a8fc342a57,Namespace:kube-system,Attempt:1,}" Feb 12 19:24:00.221947 systemd[1]: run-netns-cni\x2dfa5ac272\x2d0a71\x2d545f\x2d2223\x2dfe87b90109a0.mount: Deactivated successfully. 
Feb 12 19:24:00.245122 kubelet[2167]: E0212 19:24:00.244195 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:00.264795 kubelet[2167]: I0212 19:24:00.259414 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-785d9f5779-p69mv" podStartSLOduration=-9.223372013595413e+09 pod.CreationTimestamp="2024-02-12 19:23:37 +0000 UTC" firstStartedPulling="2024-02-12 19:23:57.700640172 +0000 UTC m=+40.741690684" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:00.259016888 +0000 UTC m=+43.300067440" watchObservedRunningTime="2024-02-12 19:24:00.259362435 +0000 UTC m=+43.300412947" Feb 12 19:24:00.353410 systemd-networkd[1105]: cali8e90f10ed16: Link UP Feb 12 19:24:00.355986 systemd-networkd[1105]: cali8e90f10ed16: Gained carrier Feb 12 19:24:00.356172 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:24:00.356210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8e90f10ed16: link becomes ready Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.270 [INFO][3899] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--b8bqg-eth0 coredns-787d4945fb- kube-system c83bff97-aaac-4eac-a511-62a8fc342a57 760 0 2024-02-12 19:23:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-b8bqg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e90f10ed16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" 
WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.270 [INFO][3899] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.301 [INFO][3913] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" HandleID="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.317 [INFO][3913] ipam_plugin.go 268: Auto assigning IP ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" HandleID="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000243a80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-b8bqg", "timestamp":"2024-02-12 19:24:00.301191282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.317 [INFO][3913] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.317 [INFO][3913] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.318 [INFO][3913] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.320 [INFO][3913] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.325 [INFO][3913] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.330 [INFO][3913] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.333 [INFO][3913] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.336 [INFO][3913] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.336 [INFO][3913] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.338 [INFO][3913] ipam.go 1682: Creating new handle: k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52 Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.343 [INFO][3913] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.349 [INFO][3913] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.349 
[INFO][3913] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" host="localhost" Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.349 [INFO][3913] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:00.368727 env[1229]: 2024-02-12 19:24:00.349 [INFO][3913] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" HandleID="k8s-pod-network.43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.371767 env[1229]: 2024-02-12 19:24:00.350 [INFO][3899] k8s.go 385: Populated endpoint ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b8bqg-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c83bff97-aaac-4eac-a511-62a8fc342a57", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-b8bqg", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e90f10ed16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:00.371767 env[1229]: 2024-02-12 19:24:00.351 [INFO][3899] k8s.go 386: Calico CNI using IPs: [192.168.88.131/32] ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.371767 env[1229]: 2024-02-12 19:24:00.351 [INFO][3899] dataplane_linux.go 68: Setting the host side veth name to cali8e90f10ed16 ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.371767 env[1229]: 2024-02-12 19:24:00.356 [INFO][3899] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.371767 env[1229]: 2024-02-12 19:24:00.356 [INFO][3899] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" 
Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b8bqg-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c83bff97-aaac-4eac-a511-62a8fc342a57", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52", Pod:"coredns-787d4945fb-b8bqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e90f10ed16", MAC:"1e:bc:11:a2:93:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:00.371767 env[1229]: 2024-02-12 19:24:00.365 [INFO][3899] k8s.go 491: Wrote 
updated endpoint to datastore ContainerID="43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52" Namespace="kube-system" Pod="coredns-787d4945fb-b8bqg" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:00.386000 audit[3934]: NETFILTER_CFG table=filter:120 family=2 entries=34 op=nft_register_chain pid=3934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:00.386000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17900 a0=3 a1=fffff08492d0 a2=0 a3=ffffaaae0fa8 items=0 ppid=3299 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:00.386000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:00.393324 env[1229]: time="2024-02-12T19:24:00.393001401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:00.393324 env[1229]: time="2024-02-12T19:24:00.393042244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:00.393324 env[1229]: time="2024-02-12T19:24:00.393061726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:00.393742 env[1229]: time="2024-02-12T19:24:00.393639970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52 pid=3942 runtime=io.containerd.runc.v2 Feb 12 19:24:00.408358 systemd[1]: run-containerd-runc-k8s.io-43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52-runc.WSq8Wn.mount: Deactivated successfully. Feb 12 19:24:00.456012 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:00.478542 env[1229]: time="2024-02-12T19:24:00.478486876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b8bqg,Uid:c83bff97-aaac-4eac-a511-62a8fc342a57,Namespace:kube-system,Attempt:1,} returns sandbox id \"43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52\"" Feb 12 19:24:00.479284 kubelet[2167]: E0212 19:24:00.479266 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:00.481283 env[1229]: time="2024-02-12T19:24:00.481244367Z" level=info msg="CreateContainer within sandbox \"43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:24:00.497523 env[1229]: time="2024-02-12T19:24:00.497446889Z" level=info msg="CreateContainer within sandbox \"43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7f20416a1fdd81b4a9e76365e1a00f6c979846afb95cffc09c671dc28d5328a\"" Feb 12 19:24:00.499587 env[1229]: time="2024-02-12T19:24:00.499200304Z" level=info msg="StartContainer for \"a7f20416a1fdd81b4a9e76365e1a00f6c979846afb95cffc09c671dc28d5328a\"" Feb 12 19:24:00.556149 
env[1229]: time="2024-02-12T19:24:00.556102827Z" level=info msg="StartContainer for \"a7f20416a1fdd81b4a9e76365e1a00f6c979846afb95cffc09c671dc28d5328a\" returns successfully" Feb 12 19:24:01.126202 env[1229]: time="2024-02-12T19:24:01.126106014Z" level=info msg="StopPodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\"" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.179 [INFO][4032] k8s.go 578: Cleaning up netns ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.179 [INFO][4032] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" iface="eth0" netns="/var/run/netns/cni-28e40554-326c-e0d8-3e85-80b3e098a128" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.179 [INFO][4032] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" iface="eth0" netns="/var/run/netns/cni-28e40554-326c-e0d8-3e85-80b3e098a128" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.179 [INFO][4032] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" iface="eth0" netns="/var/run/netns/cni-28e40554-326c-e0d8-3e85-80b3e098a128" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.179 [INFO][4032] k8s.go 585: Releasing IP address(es) ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.179 [INFO][4032] utils.go 188: Calico CNI releasing IP address ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.204 [INFO][4039] ipam_plugin.go 415: Releasing address using handleID ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.206 [INFO][4039] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.206 [INFO][4039] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.220 [WARNING][4039] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.220 [INFO][4039] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.222 [INFO][4039] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:01.225886 env[1229]: 2024-02-12 19:24:01.224 [INFO][4032] k8s.go 591: Teardown processing complete. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:01.226381 env[1229]: time="2024-02-12T19:24:01.226091692Z" level=info msg="TearDown network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" successfully" Feb 12 19:24:01.226381 env[1229]: time="2024-02-12T19:24:01.226124574Z" level=info msg="StopPodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" returns successfully" Feb 12 19:24:01.226855 env[1229]: time="2024-02-12T19:24:01.226805345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mm48t,Uid:071e00bf-e137-4b01-b026-7d482f147f4e,Namespace:calico-system,Attempt:1,}" Feb 12 19:24:01.248868 kubelet[2167]: I0212 19:24:01.247803 2167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:24:01.249243 kubelet[2167]: E0212 19:24:01.249071 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:01.271155 kubelet[2167]: I0212 19:24:01.269393 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-b8bqg" podStartSLOduration=29.269355608 pod.CreationTimestamp="2024-02-12 19:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:01.268820008 +0000 UTC m=+44.309870520" watchObservedRunningTime="2024-02-12 19:24:01.269355608 +0000 UTC m=+44.310406120" Feb 12 19:24:01.341000 audit[4086]: NETFILTER_CFG table=filter:121 family=2 entries=6 op=nft_register_rule pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:01.341000 audit[4086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 
a1=fffffd123200 a2=0 a3=ffffb20786c0 items=0 ppid=2337 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:01.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:01.344000 audit[4086]: NETFILTER_CFG table=nat:122 family=2 entries=60 op=nft_register_rule pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:01.344000 audit[4086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffffd123200 a2=0 a3=ffffb20786c0 items=0 ppid=2337 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:01.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:01.388412 systemd[1]: run-netns-cni\x2d28e40554\x2d326c\x2de0d8\x2d3e85\x2d80b3e098a128.mount: Deactivated successfully. 
Feb 12 19:24:01.397000 audit[4119]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:01.397000 audit[4119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffca08940 a2=0 a3=ffff887d06c0 items=0 ppid=2337 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:01.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:01.413000 audit[4119]: NETFILTER_CFG table=nat:124 family=2 entries=72 op=nft_register_chain pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:01.413000 audit[4119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffca08940 a2=0 a3=ffff887d06c0 items=0 ppid=2337 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:01.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:01.434122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:24:01.434236 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali502e9cad793: link becomes ready Feb 12 19:24:01.432075 systemd-networkd[1105]: cali502e9cad793: Link UP Feb 12 19:24:01.434030 systemd-networkd[1105]: cali502e9cad793: Gained carrier Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.342 [INFO][4060] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mm48t-eth0 csi-node-driver- calico-system 071e00bf-e137-4b01-b026-7d482f147f4e 779 
0 2024-02-12 19:23:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-mm48t eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali502e9cad793 [] []}} ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.342 [INFO][4060] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.378 [INFO][4090] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" HandleID="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.396 [INFO][4090] ipam_plugin.go 268: Auto assigning IP ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" HandleID="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400025f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mm48t", "timestamp":"2024-02-12 19:24:01.378326558 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.396 [INFO][4090] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.396 [INFO][4090] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.396 [INFO][4090] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.398 [INFO][4090] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.405 [INFO][4090] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.409 [INFO][4090] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.411 [INFO][4090] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.414 [INFO][4090] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.414 [INFO][4090] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.417 [INFO][4090] ipam.go 1682: Creating new handle: k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90 Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.421 [INFO][4090] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.426 [INFO][4090] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.426 [INFO][4090] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" host="localhost" Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.427 [INFO][4090] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:01.449124 env[1229]: 2024-02-12 19:24:01.427 [INFO][4090] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" HandleID="k8s-pod-network.0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.449962 env[1229]: 2024-02-12 19:24:01.428 [INFO][4060] k8s.go 385: Populated endpoint ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mm48t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"071e00bf-e137-4b01-b026-7d482f147f4e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mm48t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali502e9cad793", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:01.449962 env[1229]: 2024-02-12 19:24:01.428 [INFO][4060] k8s.go 386: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.449962 env[1229]: 2024-02-12 19:24:01.428 [INFO][4060] dataplane_linux.go 68: Setting the host side veth name to cali502e9cad793 ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.449962 env[1229]: 2024-02-12 19:24:01.434 [INFO][4060] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.449962 env[1229]: 2024-02-12 19:24:01.434 [INFO][4060] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mm48t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"071e00bf-e137-4b01-b026-7d482f147f4e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90", Pod:"csi-node-driver-mm48t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali502e9cad793", MAC:"e6:1a:8f:4a:52:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:01.449962 env[1229]: 2024-02-12 19:24:01.442 [INFO][4060] k8s.go 491: Wrote updated endpoint to datastore ContainerID="0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90" Namespace="calico-system" Pod="csi-node-driver-mm48t" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:01.467000 audit[4135]: NETFILTER_CFG table=filter:125 family=2 entries=42 op=nft_register_chain pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:01.467000 audit[4135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20696 a0=3 a1=ffffe0a8ab30 a2=0 a3=ffff99ffffa8 items=0 ppid=3299 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:01.467000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:01.472492 env[1229]: time="2024-02-12T19:24:01.472428436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:01.472652 env[1229]: time="2024-02-12T19:24:01.472469679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:01.472652 env[1229]: time="2024-02-12T19:24:01.472480600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:01.472652 env[1229]: time="2024-02-12T19:24:01.472620690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90 pid=4143 runtime=io.containerd.runc.v2 Feb 12 19:24:01.525350 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:01.540429 env[1229]: time="2024-02-12T19:24:01.540384199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mm48t,Uid:071e00bf-e137-4b01-b026-7d482f147f4e,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90\"" Feb 12 19:24:01.543729 env[1229]: time="2024-02-12T19:24:01.543677365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 19:24:01.863318 systemd-networkd[1105]: cali8e90f10ed16: Gained IPv6LL Feb 12 19:24:02.253872 kubelet[2167]: E0212 19:24:02.253838 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:02.626164 env[1229]: time="2024-02-12T19:24:02.625965250Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:02.628051 env[1229]: time="2024-02-12T19:24:02.628005479Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:02.629264 env[1229]: time="2024-02-12T19:24:02.629234089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:24:02.630585 env[1229]: time="2024-02-12T19:24:02.630559426Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:02.630930 env[1229]: time="2024-02-12T19:24:02.630897930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 12 19:24:02.637144 env[1229]: time="2024-02-12T19:24:02.637080102Z" level=info msg="CreateContainer within sandbox \"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 19:24:02.656208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672520691.mount: Deactivated successfully. Feb 12 19:24:02.682348 env[1229]: time="2024-02-12T19:24:02.682295244Z" level=info msg="CreateContainer within sandbox \"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ae7d788e5da9208595c84dd587b7a5d3a4f71d3e363cebcdcdd2d4517a840c89\"" Feb 12 19:24:02.682844 env[1229]: time="2024-02-12T19:24:02.682818082Z" level=info msg="StartContainer for \"ae7d788e5da9208595c84dd587b7a5d3a4f71d3e363cebcdcdd2d4517a840c89\"" Feb 12 19:24:02.754456 env[1229]: time="2024-02-12T19:24:02.754407670Z" level=info msg="StartContainer for \"ae7d788e5da9208595c84dd587b7a5d3a4f71d3e363cebcdcdd2d4517a840c89\" returns successfully" Feb 12 19:24:02.756481 env[1229]: time="2024-02-12T19:24:02.756447619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 19:24:03.080488 systemd-networkd[1105]: cali502e9cad793: Gained IPv6LL Feb 12 19:24:03.257008 kubelet[2167]: E0212 19:24:03.256966 2167 dns.go:156] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:04.085544 env[1229]: time="2024-02-12T19:24:04.085501808Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:04.088602 env[1229]: time="2024-02-12T19:24:04.088560942Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:04.092261 env[1229]: time="2024-02-12T19:24:04.092216917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:04.094568 env[1229]: time="2024-02-12T19:24:04.094531719Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:04.094979 env[1229]: time="2024-02-12T19:24:04.094940988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 12 19:24:04.099756 env[1229]: time="2024-02-12T19:24:04.099694159Z" level=info msg="CreateContainer within sandbox \"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 19:24:04.112635 env[1229]: time="2024-02-12T19:24:04.112574819Z" level=info msg="CreateContainer within sandbox \"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"49453d07d15fdce1058ed8dabdbb985aa341c85f45dc90c2e5b66982e5232f72\"" Feb 12 19:24:04.113383 env[1229]: time="2024-02-12T19:24:04.113352833Z" level=info msg="StartContainer for \"49453d07d15fdce1058ed8dabdbb985aa341c85f45dc90c2e5b66982e5232f72\"" Feb 12 19:24:04.141014 systemd[1]: run-containerd-runc-k8s.io-49453d07d15fdce1058ed8dabdbb985aa341c85f45dc90c2e5b66982e5232f72-runc.p7EXew.mount: Deactivated successfully. Feb 12 19:24:04.226823 env[1229]: time="2024-02-12T19:24:04.226760512Z" level=info msg="StartContainer for \"49453d07d15fdce1058ed8dabdbb985aa341c85f45dc90c2e5b66982e5232f72\" returns successfully" Feb 12 19:24:04.281474 kubelet[2167]: I0212 19:24:04.281420 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mm48t" podStartSLOduration=-9.223372009573395e+09 pod.CreationTimestamp="2024-02-12 19:23:37 +0000 UTC" firstStartedPulling="2024-02-12 19:24:01.541549246 +0000 UTC m=+44.582599718" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:04.281344124 +0000 UTC m=+47.322394636" watchObservedRunningTime="2024-02-12 19:24:04.281381687 +0000 UTC m=+47.322432199" Feb 12 19:24:04.808545 kubelet[2167]: I0212 19:24:04.808310 2167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:24:04.809358 kubelet[2167]: E0212 19:24:04.809335 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:05.063796 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:58098.service. Feb 12 19:24:05.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.84:22-10.0.0.1:58098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:05.064816 kernel: kauditd_printk_skb: 34 callbacks suppressed Feb 12 19:24:05.064919 kernel: audit: type=1130 audit(1707765845.062:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.84:22-10.0.0.1:58098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:05.109000 audit[4305]: USER_ACCT pid=4305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.111280 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 58098 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:05.111000 audit[4305]: CRED_ACQ pid=4305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.113902 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:05.115997 kernel: audit: type=1101 audit(1707765845.109:322): pid=4305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.116067 kernel: audit: type=1103 audit(1707765845.111:323): pid=4305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.116109 kernel: audit: type=1006 audit(1707765845.111:324): pid=4305 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=9 res=1 Feb 12 19:24:05.117951 kernel: audit: type=1300 audit(1707765845.111:324): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeadf6790 a2=3 a3=1 items=0 ppid=1 pid=4305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:05.111000 audit[4305]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeadf6790 a2=3 a3=1 items=0 ppid=1 pid=4305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:05.120571 kernel: audit: type=1327 audit(1707765845.111:324): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:05.111000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:05.119892 systemd-logind[1207]: New session 9 of user core. Feb 12 19:24:05.120324 systemd[1]: Started session-9.scope. 
Feb 12 19:24:05.125000 audit[4305]: USER_START pid=4305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.126000 audit[4308]: CRED_ACQ pid=4308 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.131368 kernel: audit: type=1105 audit(1707765845.125:325): pid=4305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.131504 kernel: audit: type=1103 audit(1707765845.126:326): pid=4308 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.191013 kubelet[2167]: I0212 19:24:05.190981 2167 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 19:24:05.192039 kubelet[2167]: I0212 19:24:05.192005 2167 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 19:24:05.248139 sshd[4305]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:05.248000 audit[4305]: USER_END pid=4305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.251066 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:58098.service: Deactivated successfully. Feb 12 19:24:05.251943 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:24:05.252190 kernel: audit: type=1106 audit(1707765845.248:327): pid=4305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.252250 kernel: audit: type=1104 audit(1707765845.248:328): pid=4305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.248000 audit[4305]: CRED_DISP pid=4305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:05.253152 systemd-logind[1207]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:24:05.254175 systemd-logind[1207]: Removed session 9. Feb 12 19:24:05.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.84:22-10.0.0.1:58098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:05.262419 kubelet[2167]: E0212 19:24:05.262344 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:05.941607 kubelet[2167]: I0212 19:24:05.941558 2167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:24:10.252075 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:58112.service. Feb 12 19:24:10.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.84:22-10.0.0.1:58112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:10.254561 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:24:10.254666 kernel: audit: type=1130 audit(1707765850.251:330): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.84:22-10.0.0.1:58112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:10.297000 audit[4362]: USER_ACCT pid=4362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.298569 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 58112 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:10.299970 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:10.297000 audit[4362]: CRED_ACQ pid=4362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.304574 kernel: audit: type=1101 audit(1707765850.297:331): pid=4362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.304670 kernel: audit: type=1103 audit(1707765850.297:332): pid=4362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.306429 kernel: audit: type=1006 audit(1707765850.297:333): pid=4362 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 12 19:24:10.297000 audit[4362]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0b36980 a2=3 a3=1 items=0 ppid=1 pid=4362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:10.310564 
kernel: audit: type=1300 audit(1707765850.297:333): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0b36980 a2=3 a3=1 items=0 ppid=1 pid=4362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:10.297000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:10.311960 kernel: audit: type=1327 audit(1707765850.297:333): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:10.313795 systemd[1]: Started session-10.scope. Feb 12 19:24:10.314228 systemd-logind[1207]: New session 10 of user core. Feb 12 19:24:10.320000 audit[4362]: USER_START pid=4362 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.320000 audit[4365]: CRED_ACQ pid=4365 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.327693 kernel: audit: type=1105 audit(1707765850.320:334): pid=4362 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.327808 kernel: audit: type=1103 audit(1707765850.320:335): pid=4365 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.473129 sshd[4362]: pam_unix(sshd:session): session closed for user core Feb 12 
19:24:10.473000 audit[4362]: USER_END pid=4362 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.476012 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:58112.service: Deactivated successfully. Feb 12 19:24:10.473000 audit[4362]: CRED_DISP pid=4362 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.476956 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:24:10.479108 kernel: audit: type=1106 audit(1707765850.473:336): pid=4362 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.479234 kernel: audit: type=1104 audit(1707765850.473:337): pid=4362 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:10.479700 systemd-logind[1207]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:24:10.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.84:22-10.0.0.1:58112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:10.480908 systemd-logind[1207]: Removed session 10. Feb 12 19:24:15.478881 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:58328.service. 
Feb 12 19:24:15.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.84:22-10.0.0.1:58328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:15.483734 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:24:15.483857 kernel: audit: type=1130 audit(1707765855.479:339): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.84:22-10.0.0.1:58328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:15.533000 audit[4386]: USER_ACCT pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.533888 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:15.536193 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:15.535000 audit[4386]: CRED_ACQ pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.541257 systemd-logind[1207]: New session 11 of user core. Feb 12 19:24:15.541615 systemd[1]: Started session-11.scope. 
Feb 12 19:24:15.544764 kernel: audit: type=1101 audit(1707765855.533:340): pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.544854 kernel: audit: type=1103 audit(1707765855.535:341): pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.544877 kernel: audit: type=1006 audit(1707765855.535:342): pid=4386 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 12 19:24:15.535000 audit[4386]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd86b1720 a2=3 a3=1 items=0 ppid=1 pid=4386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:15.554766 kernel: audit: type=1300 audit(1707765855.535:342): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd86b1720 a2=3 a3=1 items=0 ppid=1 pid=4386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:15.535000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:15.555695 kernel: audit: type=1327 audit(1707765855.535:342): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:15.555746 kernel: audit: type=1105 audit(1707765855.547:343): pid=4386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.547000 audit[4386]: USER_START pid=4386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.549000 audit[4389]: CRED_ACQ pid=4389 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.561344 kernel: audit: type=1103 audit(1707765855.549:344): pid=4389 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.686858 sshd[4386]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:15.689000 audit[4386]: USER_END pid=4386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.691782 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:58342.service. 
Feb 12 19:24:15.692000 audit[4386]: CRED_DISP pid=4386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.694934 kernel: audit: type=1106 audit(1707765855.689:345): pid=4386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.695034 kernel: audit: type=1104 audit(1707765855.692:346): pid=4386 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.84:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:15.699448 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:58328.service: Deactivated successfully. Feb 12 19:24:15.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.84:22-10.0.0.1:58328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:15.701323 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:24:15.701884 systemd-logind[1207]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:24:15.702905 systemd-logind[1207]: Removed session 11. 
Feb 12 19:24:15.750000 audit[4399]: USER_ACCT pid=4399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.751397 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:15.754000 audit[4399]: CRED_ACQ pid=4399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.754000 audit[4399]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5966310 a2=3 a3=1 items=0 ppid=1 pid=4399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:15.754000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:15.754870 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:15.760151 systemd-logind[1207]: New session 12 of user core. Feb 12 19:24:15.760823 systemd[1]: Started session-12.scope. 
Feb 12 19:24:15.764000 audit[4399]: USER_START pid=4399 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:15.766000 audit[4404]: CRED_ACQ pid=4404 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.019099 sshd[4399]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:16.021018 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:58356.service. Feb 12 19:24:16.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.84:22-10.0.0.1:58356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:16.028000 audit[4399]: USER_END pid=4399 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.033000 audit[4399]: CRED_DISP pid=4399 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.038127 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:58342.service: Deactivated successfully. Feb 12 19:24:16.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.84:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:16.039736 systemd-logind[1207]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:24:16.039807 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:24:16.041137 systemd-logind[1207]: Removed session 12. Feb 12 19:24:16.073000 audit[4411]: USER_ACCT pid=4411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.074783 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 58356 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:16.075000 audit[4411]: CRED_ACQ pid=4411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.075000 audit[4411]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1594f10 a2=3 a3=1 items=0 ppid=1 pid=4411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:16.075000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:16.076092 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:16.079574 systemd-logind[1207]: New session 13 of user core. Feb 12 19:24:16.080462 systemd[1]: Started session-13.scope. 
Feb 12 19:24:16.084000 audit[4411]: USER_START pid=4411 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.084000 audit[4416]: CRED_ACQ pid=4416 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.197538 sshd[4411]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:16.197000 audit[4411]: USER_END pid=4411 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.197000 audit[4411]: CRED_DISP pid=4411 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:16.200684 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:58356.service: Deactivated successfully. Feb 12 19:24:16.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.84:22-10.0.0.1:58356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:16.201708 systemd-logind[1207]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:24:16.201777 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:24:16.202790 systemd-logind[1207]: Removed session 13. 
Feb 12 19:24:17.054577 env[1229]: time="2024-02-12T19:24:17.054528259Z" level=info msg="StopPodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\"" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.094 [WARNING][4443] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mm48t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"071e00bf-e137-4b01-b026-7d482f147f4e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90", Pod:"csi-node-driver-mm48t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali502e9cad793", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.094 [INFO][4443] k8s.go 
578: Cleaning up netns ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.094 [INFO][4443] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" iface="eth0" netns="" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.094 [INFO][4443] k8s.go 585: Releasing IP address(es) ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.094 [INFO][4443] utils.go 188: Calico CNI releasing IP address ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.113 [INFO][4451] ipam_plugin.go 415: Releasing address using handleID ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.113 [INFO][4451] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.113 [INFO][4451] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.127 [WARNING][4451] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.127 [INFO][4451] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.131 [INFO][4451] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:17.135828 env[1229]: 2024-02-12 19:24:17.134 [INFO][4443] k8s.go 591: Teardown processing complete. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.136290 env[1229]: time="2024-02-12T19:24:17.135868785Z" level=info msg="TearDown network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" successfully" Feb 12 19:24:17.136290 env[1229]: time="2024-02-12T19:24:17.135900587Z" level=info msg="StopPodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" returns successfully" Feb 12 19:24:17.138471 env[1229]: time="2024-02-12T19:24:17.138432011Z" level=info msg="RemovePodSandbox for \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\"" Feb 12 19:24:17.138565 env[1229]: time="2024-02-12T19:24:17.138482093Z" level=info msg="Forcibly stopping sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\"" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.178 [WARNING][4477] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mm48t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"071e00bf-e137-4b01-b026-7d482f147f4e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c7088262799328d3504d192c8b4e278cb506ddefa6cd0b799353b1e442d3b90", Pod:"csi-node-driver-mm48t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali502e9cad793", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.179 [INFO][4477] k8s.go 578: Cleaning up netns ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.179 [INFO][4477] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" iface="eth0" netns="" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.179 [INFO][4477] k8s.go 585: Releasing IP address(es) ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.179 [INFO][4477] utils.go 188: Calico CNI releasing IP address ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.197 [INFO][4484] ipam_plugin.go 415: Releasing address using handleID ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.197 [INFO][4484] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.197 [INFO][4484] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.210 [WARNING][4484] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.211 [INFO][4484] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" HandleID="k8s-pod-network.8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Workload="localhost-k8s-csi--node--driver--mm48t-eth0" Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.213 [INFO][4484] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:17.217060 env[1229]: 2024-02-12 19:24:17.215 [INFO][4477] k8s.go 591: Teardown processing complete. ContainerID="8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446" Feb 12 19:24:17.217582 env[1229]: time="2024-02-12T19:24:17.217116267Z" level=info msg="TearDown network for sandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" successfully" Feb 12 19:24:17.253346 env[1229]: time="2024-02-12T19:24:17.253294956Z" level=info msg="RemovePodSandbox \"8b7475aba123276f9fd0814a07a05ca11cc6b25655951ef7bb9f78415403c446\" returns successfully" Feb 12 19:24:17.253814 env[1229]: time="2024-02-12T19:24:17.253790024Z" level=info msg="StopPodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\"" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.306 [WARNING][4507] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b8bqg-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c83bff97-aaac-4eac-a511-62a8fc342a57", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52", Pod:"coredns-787d4945fb-b8bqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e90f10ed16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.307 [INFO][4507] k8s.go 578: Cleaning up netns ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.307 [INFO][4507] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" iface="eth0" netns="" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.307 [INFO][4507] k8s.go 585: Releasing IP address(es) ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.307 [INFO][4507] utils.go 188: Calico CNI releasing IP address ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.324 [INFO][4515] ipam_plugin.go 415: Releasing address using handleID ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.325 [INFO][4515] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.325 [INFO][4515] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.335 [WARNING][4515] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.335 [INFO][4515] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.337 [INFO][4515] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:17.340079 env[1229]: 2024-02-12 19:24:17.338 [INFO][4507] k8s.go 591: Teardown processing complete. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.340996 env[1229]: time="2024-02-12T19:24:17.340040109Z" level=info msg="TearDown network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" successfully" Feb 12 19:24:17.341096 env[1229]: time="2024-02-12T19:24:17.341000363Z" level=info msg="StopPodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" returns successfully" Feb 12 19:24:17.341713 env[1229]: time="2024-02-12T19:24:17.341687082Z" level=info msg="RemovePodSandbox for \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\"" Feb 12 19:24:17.341775 env[1229]: time="2024-02-12T19:24:17.341723284Z" level=info msg="Forcibly stopping sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\"" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.377 [WARNING][4538] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b8bqg-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c83bff97-aaac-4eac-a511-62a8fc342a57", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43a6b2187d801d7ff4413b59737e3990f9bbb0bc05329dcdadcb01319c317a52", Pod:"coredns-787d4945fb-b8bqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e90f10ed16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.377 [INFO][4538] k8s.go 578: Cleaning up netns 
ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.377 [INFO][4538] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" iface="eth0" netns="" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.377 [INFO][4538] k8s.go 585: Releasing IP address(es) ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.378 [INFO][4538] utils.go 188: Calico CNI releasing IP address ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.397 [INFO][4546] ipam_plugin.go 415: Releasing address using handleID ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.397 [INFO][4546] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.397 [INFO][4546] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.407 [WARNING][4546] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.407 [INFO][4546] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" HandleID="k8s-pod-network.2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Workload="localhost-k8s-coredns--787d4945fb--b8bqg-eth0" Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.409 [INFO][4546] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:17.412172 env[1229]: 2024-02-12 19:24:17.410 [INFO][4538] k8s.go 591: Teardown processing complete. ContainerID="2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6" Feb 12 19:24:17.412649 env[1229]: time="2024-02-12T19:24:17.412228837Z" level=info msg="TearDown network for sandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" successfully" Feb 12 19:24:17.439977 env[1229]: time="2024-02-12T19:24:17.439911605Z" level=info msg="RemovePodSandbox \"2607bc37bd7bf73e43ec22e5816046ee99cdd14de16391c70450d373ec08f6c6\" returns successfully" Feb 12 19:24:17.440417 env[1229]: time="2024-02-12T19:24:17.440378991Z" level=info msg="StopPodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\"" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.478 [WARNING][4569] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0", GenerateName:"calico-kube-controllers-785d9f5779-", Namespace:"calico-system", SelfLink:"", UID:"031fa1cc-247a-40b4-ac9d-8e6f80abd15d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785d9f5779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f", Pod:"calico-kube-controllers-785d9f5779-p69mv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7894daac8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.478 [INFO][4569] k8s.go 578: Cleaning up netns ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.478 [INFO][4569] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" iface="eth0" netns="" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.478 [INFO][4569] k8s.go 585: Releasing IP address(es) ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.478 [INFO][4569] utils.go 188: Calico CNI releasing IP address ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.498 [INFO][4577] ipam_plugin.go 415: Releasing address using handleID ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.498 [INFO][4577] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.498 [INFO][4577] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.510 [WARNING][4577] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.510 [INFO][4577] ipam_plugin.go 443: Releasing address using workloadID ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.512 [INFO][4577] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:17.515477 env[1229]: 2024-02-12 19:24:17.514 [INFO][4569] k8s.go 591: Teardown processing complete. ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.515946 env[1229]: time="2024-02-12T19:24:17.515507766Z" level=info msg="TearDown network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" successfully" Feb 12 19:24:17.515946 env[1229]: time="2024-02-12T19:24:17.515547488Z" level=info msg="StopPodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" returns successfully" Feb 12 19:24:17.516201 env[1229]: time="2024-02-12T19:24:17.516171684Z" level=info msg="RemovePodSandbox for \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\"" Feb 12 19:24:17.516325 env[1229]: time="2024-02-12T19:24:17.516287890Z" level=info msg="Forcibly stopping sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\"" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.555 [WARNING][4600] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0", GenerateName:"calico-kube-controllers-785d9f5779-", Namespace:"calico-system", SelfLink:"", UID:"031fa1cc-247a-40b4-ac9d-8e6f80abd15d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785d9f5779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa7a46022390c77d79e29a256d40bcb0d86d4bea0d3987c02c9e02b5021a956f", Pod:"calico-kube-controllers-785d9f5779-p69mv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7894daac8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.555 [INFO][4600] k8s.go 578: Cleaning up netns ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.555 [INFO][4600] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" iface="eth0" netns="" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.555 [INFO][4600] k8s.go 585: Releasing IP address(es) ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.555 [INFO][4600] utils.go 188: Calico CNI releasing IP address ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.573 [INFO][4608] ipam_plugin.go 415: Releasing address using handleID ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.573 [INFO][4608] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.573 [INFO][4608] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.585 [WARNING][4608] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.585 [INFO][4608] ipam_plugin.go 443: Releasing address using workloadID ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" HandleID="k8s-pod-network.870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Workload="localhost-k8s-calico--kube--controllers--785d9f5779--p69mv-eth0" Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.586 [INFO][4608] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:17.589331 env[1229]: 2024-02-12 19:24:17.588 [INFO][4600] k8s.go 591: Teardown processing complete. ContainerID="870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9" Feb 12 19:24:17.589960 env[1229]: time="2024-02-12T19:24:17.589925381Z" level=info msg="TearDown network for sandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" successfully" Feb 12 19:24:17.593136 env[1229]: time="2024-02-12T19:24:17.593027877Z" level=info msg="RemovePodSandbox \"870578bb8f645df01315a16c98a3df2700c3d48b12b5fdbc8b959e953b80d8c9\" returns successfully" Feb 12 19:24:17.594667 env[1229]: time="2024-02-12T19:24:17.594590725Z" level=info msg="StopPodSandbox for \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\"" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.630 [WARNING][4631] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--jfr6q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2456b22f-ee9c-4a55-886a-f4394cd661b0", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730", Pod:"coredns-787d4945fb-jfr6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d0e9045a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.630 [INFO][4631] k8s.go 578: Cleaning up netns ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.631 [INFO][4631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" iface="eth0" netns="" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.631 [INFO][4631] k8s.go 585: Releasing IP address(es) ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.631 [INFO][4631] utils.go 188: Calico CNI releasing IP address ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.648 [INFO][4638] ipam_plugin.go 415: Releasing address using handleID ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.648 [INFO][4638] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.648 [INFO][4638] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.658 [WARNING][4638] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.658 [INFO][4638] ipam_plugin.go 443: Releasing address using workloadID ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.660 [INFO][4638] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:17.663055 env[1229]: 2024-02-12 19:24:17.661 [INFO][4631] k8s.go 591: Teardown processing complete. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.663625 env[1229]: time="2024-02-12T19:24:17.663589593Z" level=info msg="TearDown network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" successfully" Feb 12 19:24:17.663699 env[1229]: time="2024-02-12T19:24:17.663683758Z" level=info msg="StopPodSandbox for \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" returns successfully" Feb 12 19:24:17.664292 env[1229]: time="2024-02-12T19:24:17.664254350Z" level=info msg="RemovePodSandbox for \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\"" Feb 12 19:24:17.664371 env[1229]: time="2024-02-12T19:24:17.664296873Z" level=info msg="Forcibly stopping sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\"" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.715 [WARNING][4662] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--jfr6q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2456b22f-ee9c-4a55-886a-f4394cd661b0", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecefa3282d0b3d306794ffbaf9d90376c6e334ebe46c6c60f5e7b4e152276730", Pod:"coredns-787d4945fb-jfr6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d0e9045a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.715 [INFO][4662] k8s.go 578: Cleaning up netns 
ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.715 [INFO][4662] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" iface="eth0" netns="" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.715 [INFO][4662] k8s.go 585: Releasing IP address(es) ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.715 [INFO][4662] utils.go 188: Calico CNI releasing IP address ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.739 [INFO][4670] ipam_plugin.go 415: Releasing address using handleID ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.739 [INFO][4670] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.739 [INFO][4670] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.749 [WARNING][4670] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.749 [INFO][4670] ipam_plugin.go 443: Releasing address using workloadID ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" HandleID="k8s-pod-network.88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Workload="localhost-k8s-coredns--787d4945fb--jfr6q-eth0" Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.751 [INFO][4670] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:17.754465 env[1229]: 2024-02-12 19:24:17.753 [INFO][4662] k8s.go 591: Teardown processing complete. ContainerID="88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3" Feb 12 19:24:17.754912 env[1229]: time="2024-02-12T19:24:17.754494661Z" level=info msg="TearDown network for sandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" successfully" Feb 12 19:24:17.757286 env[1229]: time="2024-02-12T19:24:17.757207335Z" level=info msg="RemovePodSandbox \"88ee4d74e37d81e69bba7bfe6b6ad55ff327af303acbe4ed5e3b5e82564f17b3\" returns successfully" Feb 12 19:24:21.204221 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:24:21.204363 kernel: audit: type=1130 audit(1707765861.199:366): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.84:22-10.0.0.1:58368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:21.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.84:22-10.0.0.1:58368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:21.200649 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:58368.service. Feb 12 19:24:21.243000 audit[4682]: USER_ACCT pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.245045 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 58368 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:21.246310 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:21.244000 audit[4682]: CRED_ACQ pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.249674 kernel: audit: type=1101 audit(1707765861.243:367): pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.249764 kernel: audit: type=1103 audit(1707765861.244:368): pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.251263 kernel: audit: type=1006 audit(1707765861.244:369): pid=4682 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 12 19:24:21.251347 kernel: audit: type=1300 audit(1707765861.244:369): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6946c30 a2=3 a3=1 items=0 ppid=1 pid=4682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.244000 audit[4682]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6946c30 a2=3 a3=1 items=0 ppid=1 pid=4682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.244000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:21.254923 kernel: audit: type=1327 audit(1707765861.244:369): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:21.256650 systemd-logind[1207]: New session 14 of user core. Feb 12 19:24:21.257554 systemd[1]: Started session-14.scope. Feb 12 19:24:21.263000 audit[4682]: USER_START pid=4682 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.263000 audit[4685]: CRED_ACQ pid=4685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.269866 kernel: audit: type=1105 audit(1707765861.263:370): pid=4682 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.269942 kernel: audit: type=1103 audit(1707765861.263:371): pid=4685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 
19:24:21.381701 sshd[4682]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:21.381000 audit[4682]: USER_END pid=4682 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.384978 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:58368.service: Deactivated successfully. Feb 12 19:24:21.382000 audit[4682]: CRED_DISP pid=4682 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.385981 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:24:21.386566 systemd-logind[1207]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:24:21.387503 systemd-logind[1207]: Removed session 14. Feb 12 19:24:21.388181 kernel: audit: type=1106 audit(1707765861.381:372): pid=4682 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.388240 kernel: audit: type=1104 audit(1707765861.382:373): pid=4682 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:21.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.84:22-10.0.0.1:58368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:26.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.84:22-10.0.0.1:40148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:26.388641 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:40148.service. Feb 12 19:24:26.389383 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:24:26.389422 kernel: audit: type=1130 audit(1707765866.387:375): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.84:22-10.0.0.1:40148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:26.434000 audit[4699]: USER_ACCT pid=4699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.435978 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 40148 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:26.438075 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:26.436000 audit[4699]: CRED_ACQ pid=4699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.441152 kernel: audit: type=1101 audit(1707765866.434:376): pid=4699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.441227 kernel: audit: type=1103 audit(1707765866.436:377): pid=4699 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.441272 kernel: audit: type=1006 audit(1707765866.436:378): pid=4699 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 12 19:24:26.445961 kernel: audit: type=1300 audit(1707765866.436:378): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb83be80 a2=3 a3=1 items=0 ppid=1 pid=4699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:26.446047 kernel: audit: type=1327 audit(1707765866.436:378): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:26.436000 audit[4699]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb83be80 a2=3 a3=1 items=0 ppid=1 pid=4699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:26.436000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:26.443479 systemd[1]: Started session-15.scope. Feb 12 19:24:26.444035 systemd-logind[1207]: New session 15 of user core. 
Feb 12 19:24:26.448000 audit[4699]: USER_START pid=4699 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.450000 audit[4702]: CRED_ACQ pid=4702 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.455965 kernel: audit: type=1105 audit(1707765866.448:379): pid=4699 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.456065 kernel: audit: type=1103 audit(1707765866.450:380): pid=4702 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.569046 sshd[4699]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:26.569000 audit[4699]: USER_END pid=4699 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.572699 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:40148.service: Deactivated successfully. Feb 12 19:24:26.573953 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 12 19:24:26.569000 audit[4699]: CRED_DISP pid=4699 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.574483 systemd-logind[1207]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:24:26.576622 kernel: audit: type=1106 audit(1707765866.569:381): pid=4699 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.576787 kernel: audit: type=1104 audit(1707765866.569:382): pid=4699 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:26.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.84:22-10.0.0.1:40148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:26.576131 systemd-logind[1207]: Removed session 15. Feb 12 19:24:31.573528 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:40162.service. Feb 12 19:24:31.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.84:22-10.0.0.1:40162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:31.574593 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 19:24:31.574721 kernel: audit: type=1130 audit(1707765871.572:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.84:22-10.0.0.1:40162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:31.621000 audit[4715]: USER_ACCT pid=4715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.622045 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 40162 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:31.623672 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:31.622000 audit[4715]: CRED_ACQ pid=4715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.631739 kernel: audit: type=1101 audit(1707765871.621:385): pid=4715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.631862 kernel: audit: type=1103 audit(1707765871.622:386): pid=4715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.631893 kernel: audit: type=1006 audit(1707765871.622:387): pid=4715 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=16 res=1 Feb 12 19:24:31.634062 kernel: audit: type=1300 audit(1707765871.622:387): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1bd1b20 a2=3 a3=1 items=0 ppid=1 pid=4715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:31.622000 audit[4715]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1bd1b20 a2=3 a3=1 items=0 ppid=1 pid=4715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:31.634303 systemd-logind[1207]: New session 16 of user core. Feb 12 19:24:31.635172 systemd[1]: Started session-16.scope. Feb 12 19:24:31.638580 kernel: audit: type=1327 audit(1707765871.622:387): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:31.622000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:31.638000 audit[4715]: USER_START pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.643374 kernel: audit: type=1105 audit(1707765871.638:388): pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.643511 kernel: audit: type=1103 audit(1707765871.640:389): pid=4718 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 
12 19:24:31.640000 audit[4718]: CRED_ACQ pid=4718 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.758314 sshd[4715]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:31.758000 audit[4715]: USER_END pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.761335 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:40162.service: Deactivated successfully. Feb 12 19:24:31.758000 audit[4715]: CRED_DISP pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.762619 systemd-logind[1207]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:24:31.762687 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:24:31.764217 systemd-logind[1207]: Removed session 16. 
Feb 12 19:24:31.764659 kernel: audit: type=1106 audit(1707765871.758:390): pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.764714 kernel: audit: type=1104 audit(1707765871.758:391): pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:31.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.84:22-10.0.0.1:40162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:34.825012 systemd[1]: run-containerd-runc-k8s.io-30a3db6055831aa59665c0d8d5107943412162d3bb5c79ce53f0f42e72d62ca9-runc.V7OYS6.mount: Deactivated successfully. 
Feb 12 19:24:36.125814 kubelet[2167]: E0212 19:24:36.125692 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:24:36.559232 kubelet[2167]: I0212 19:24:36.559197 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:24:36.613000 audit[4827]: NETFILTER_CFG table=filter:126 family=2 entries=7 op=nft_register_rule pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.616510 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:24:36.616615 kernel: audit: type=1325 audit(1707765876.613:393): table=filter:126 family=2 entries=7 op=nft_register_rule pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.616663 kernel: audit: type=1300 audit(1707765876.613:393): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe6653d30 a2=0 a3=ffff918786c0 items=0 ppid=2337 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.613000 audit[4827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe6653d30 a2=0 a3=ffff918786c0 items=0 ppid=2337 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.621510 kernel: audit: type=1327 audit(1707765876.613:393): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.613000 audit[4827]: NETFILTER_CFG table=nat:127 family=2 entries=78 op=nft_register_rule pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.613000 audit[4827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe6653d30 a2=0 a3=ffff918786c0 items=0 ppid=2337 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.627472 kernel: audit: type=1325 audit(1707765876.613:394): table=nat:127 family=2 entries=78 op=nft_register_rule pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.627559 kernel: audit: type=1300 audit(1707765876.613:394): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe6653d30 a2=0 a3=ffff918786c0 items=0 ppid=2337 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.627580 kernel: audit: type=1327 audit(1707765876.613:394): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.633278 kubelet[2167]: I0212 19:24:36.633251 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvn25\" (UniqueName: \"kubernetes.io/projected/e95c0b81-1c9b-4e26-8ee0-28789cbb311a-kube-api-access-fvn25\") pod \"calico-apiserver-f9f7fb78c-6bg5h\" (UID: \"e95c0b81-1c9b-4e26-8ee0-28789cbb311a\") " pod="calico-apiserver/calico-apiserver-f9f7fb78c-6bg5h"
Feb 12 19:24:36.633563 kubelet[2167]: I0212 19:24:36.633545 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e95c0b81-1c9b-4e26-8ee0-28789cbb311a-calico-apiserver-certs\") pod \"calico-apiserver-f9f7fb78c-6bg5h\" (UID: \"e95c0b81-1c9b-4e26-8ee0-28789cbb311a\") " pod="calico-apiserver/calico-apiserver-f9f7fb78c-6bg5h"
Feb 12 19:24:36.660000 audit[4853]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.660000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffdf84f50 a2=0 a3=ffff8dd586c0 items=0 ppid=2337 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.666608 kernel: audit: type=1325 audit(1707765876.660:395): table=filter:128 family=2 entries=8 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.666727 kernel: audit: type=1300 audit(1707765876.660:395): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffdf84f50 a2=0 a3=ffff8dd586c0 items=0 ppid=2337 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.666746 kernel: audit: type=1327 audit(1707765876.660:395): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.662000 audit[4853]: NETFILTER_CFG table=nat:129 family=2 entries=78 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.662000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffdf84f50 a2=0 a3=ffff8dd586c0 items=0 ppid=2337 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.671151 kernel: audit: type=1325 audit(1707765876.662:396): table=nat:129 family=2 entries=78 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 19:24:36.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 19:24:36.735049 kubelet[2167]: E0212 19:24:36.735015 2167 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Feb 12 19:24:36.735500 kubelet[2167]: E0212 19:24:36.735476 2167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e95c0b81-1c9b-4e26-8ee0-28789cbb311a-calico-apiserver-certs podName:e95c0b81-1c9b-4e26-8ee0-28789cbb311a nodeName:}" failed. No retries permitted until 2024-02-12 19:24:37.235251343 +0000 UTC m=+80.276301815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e95c0b81-1c9b-4e26-8ee0-28789cbb311a-calico-apiserver-certs") pod "calico-apiserver-f9f7fb78c-6bg5h" (UID: "e95c0b81-1c9b-4e26-8ee0-28789cbb311a") : secret "calico-apiserver-certs" not found
Feb 12 19:24:36.761050 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:54572.service.
Feb 12 19:24:36.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.84:22-10.0.0.1:54572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:36.802200 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 54572 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:24:36.801000 audit[4855]: USER_ACCT pid=4855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.803000 audit[4855]: CRED_ACQ pid=4855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.803000 audit[4855]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe62ccb20 a2=3 a3=1 items=0 ppid=1 pid=4855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.803000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:24:36.804709 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:36.810805 systemd[1]: Started session-17.scope.
Feb 12 19:24:36.811636 systemd-logind[1207]: New session 17 of user core.
Feb 12 19:24:36.816000 audit[4855]: USER_START pid=4855 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.817000 audit[4858]: CRED_ACQ pid=4858 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.931365 sshd[4855]: pam_unix(sshd:session): session closed for user core
Feb 12 19:24:36.931000 audit[4855]: USER_END pid=4855 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.931000 audit[4855]: CRED_DISP pid=4855 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.84:22-10.0.0.1:54578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:36.933762 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:54578.service.
Feb 12 19:24:36.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.84:22-10.0.0.1:54572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:36.935337 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:54572.service: Deactivated successfully.
Feb 12 19:24:36.936408 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 19:24:36.936455 systemd-logind[1207]: Session 17 logged out. Waiting for processes to exit.
Feb 12 19:24:36.938329 systemd-logind[1207]: Removed session 17.
Feb 12 19:24:36.973000 audit[4867]: USER_ACCT pid=4867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.974923 sshd[4867]: Accepted publickey for core from 10.0.0.1 port 54578 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:24:36.974000 audit[4867]: CRED_ACQ pid=4867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.974000 audit[4867]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd8d5e00 a2=3 a3=1 items=0 ppid=1 pid=4867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:36.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:24:36.976186 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:36.979937 systemd-logind[1207]: New session 18 of user core.
Feb 12 19:24:36.980797 systemd[1]: Started session-18.scope.
Feb 12 19:24:36.985000 audit[4867]: USER_START pid=4867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:36.986000 audit[4872]: CRED_ACQ pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.234037 sshd[4867]: pam_unix(sshd:session): session closed for user core
Feb 12 19:24:37.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.84:22-10.0.0.1:54586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:37.236430 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:54586.service.
Feb 12 19:24:37.236000 audit[4867]: USER_END pid=4867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.236000 audit[4867]: CRED_DISP pid=4867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.239272 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:54578.service: Deactivated successfully.
Feb 12 19:24:37.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.84:22-10.0.0.1:54578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:37.240620 systemd-logind[1207]: Session 18 logged out. Waiting for processes to exit.
Feb 12 19:24:37.240651 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 19:24:37.241400 systemd-logind[1207]: Removed session 18.
Feb 12 19:24:37.281000 audit[4879]: USER_ACCT pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.283170 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 54586 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:24:37.282000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.283000 audit[4879]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0526920 a2=3 a3=1 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:37.283000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:24:37.284434 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:37.287980 systemd-logind[1207]: New session 19 of user core.
Feb 12 19:24:37.289686 systemd[1]: Started session-19.scope.
Feb 12 19:24:37.292000 audit[4879]: USER_START pid=4879 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.293000 audit[4885]: CRED_ACQ pid=4885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:37.463443 env[1229]: time="2024-02-12T19:24:37.462995732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9f7fb78c-6bg5h,Uid:e95c0b81-1c9b-4e26-8ee0-28789cbb311a,Namespace:calico-apiserver,Attempt:0,}"
Feb 12 19:24:37.708307 systemd-networkd[1105]: caliaafafe212c5: Link UP
Feb 12 19:24:37.710151 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:24:37.710241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaafafe212c5: link becomes ready
Feb 12 19:24:37.709959 systemd-networkd[1105]: caliaafafe212c5: Gained carrier
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.608 [INFO][4892] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0 calico-apiserver-f9f7fb78c- calico-apiserver e95c0b81-1c9b-4e26-8ee0-28789cbb311a 1035 0 2024-02-12 19:24:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f9f7fb78c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f9f7fb78c-6bg5h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaafafe212c5 [] []}} ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.608 [INFO][4892] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.638 [INFO][4905] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" HandleID="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Workload="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.655 [INFO][4905] ipam_plugin.go 268: Auto assigning IP ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" HandleID="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Workload="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f9f7fb78c-6bg5h", "timestamp":"2024-02-12 19:24:37.638760031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.655 [INFO][4905] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.655 [INFO][4905] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.655 [INFO][4905] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.660 [INFO][4905] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.667 [INFO][4905] ipam.go 372: Looking up existing affinities for host host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.673 [INFO][4905] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.675 [INFO][4905] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.678 [INFO][4905] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.678 [INFO][4905] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.685 [INFO][4905] ipam.go 1682: Creating new handle: k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.698 [INFO][4905] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.704 [INFO][4905] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.704 [INFO][4905] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" host="localhost"
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.704 [INFO][4905] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 12 19:24:37.720434 env[1229]: 2024-02-12 19:24:37.704 [INFO][4905] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" HandleID="k8s-pod-network.c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Workload="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.721011 env[1229]: 2024-02-12 19:24:37.706 [INFO][4892] k8s.go 385: Populated endpoint ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0", GenerateName:"calico-apiserver-f9f7fb78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e95c0b81-1c9b-4e26-8ee0-28789cbb311a", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9f7fb78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f9f7fb78c-6bg5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaafafe212c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:24:37.721011 env[1229]: 2024-02-12 19:24:37.706 [INFO][4892] k8s.go 386: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.721011 env[1229]: 2024-02-12 19:24:37.706 [INFO][4892] dataplane_linux.go 68: Setting the host side veth name to caliaafafe212c5 ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.721011 env[1229]: 2024-02-12 19:24:37.708 [INFO][4892] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.721011 env[1229]: 2024-02-12 19:24:37.710 [INFO][4892] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0", GenerateName:"calico-apiserver-f9f7fb78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e95c0b81-1c9b-4e26-8ee0-28789cbb311a", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9f7fb78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad", Pod:"calico-apiserver-f9f7fb78c-6bg5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaafafe212c5", MAC:"9a:38:96:46:6d:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:24:37.721011 env[1229]: 2024-02-12 19:24:37.716 [INFO][4892] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-6bg5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--f9f7fb78c--6bg5h-eth0"
Feb 12 19:24:37.738140 env[1229]: time="2024-02-12T19:24:37.737438489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:24:37.738140 env[1229]: time="2024-02-12T19:24:37.737498167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:24:37.738140 env[1229]: time="2024-02-12T19:24:37.737514287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:24:37.738140 env[1229]: time="2024-02-12T19:24:37.737653124Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad pid=4940 runtime=io.containerd.runc.v2
Feb 12 19:24:37.757000 audit[4960]: NETFILTER_CFG table=filter:130 family=2 entries=59 op=nft_register_chain pid=4960 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 12 19:24:37.757000 audit[4960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29292 a0=3 a1=ffffc7ba1ee0 a2=0 a3=ffff80faefa8 items=0 ppid=3299 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:37.757000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 12 19:24:37.808306 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:24:37.834783 env[1229]: time="2024-02-12T19:24:37.834727219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9f7fb78c-6bg5h,Uid:e95c0b81-1c9b-4e26-8ee0-28789cbb311a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad\""
Feb 12 19:24:37.835999 env[1229]: time="2024-02-12T19:24:37.835972710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\""
Feb 12 19:24:39.245365 sshd[4879]: pam_unix(sshd:session): session closed for user core
Feb 12 19:24:39.247000 audit[4879]: USER_END pid=4879 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:39.247000 audit[4879]: CRED_DISP pid=4879 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:39.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.84:22-10.0.0.1:54592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:39.248891 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:54592.service.
Feb 12 19:24:39.251495 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:54586.service: Deactivated successfully.
Feb 12 19:24:39.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.84:22-10.0.0.1:54586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:39.252765 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 19:24:39.253172 systemd-logind[1207]: Session 19 logged out. Waiting for processes to exit.
Feb 12 19:24:39.253965 systemd-logind[1207]: Removed session 19.
Feb 12 19:24:39.295000 audit[4988]: USER_ACCT pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:39.296505 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 54592 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:24:39.296000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:39.296000 audit[4988]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe12e2d00 a2=3 a3=1 items=0 ppid=1 pid=4988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:39.296000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:24:39.298055 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:39.302821 systemd[1]: Started session-20.scope.
Feb 12 19:24:39.303181 systemd-logind[1207]: New session 20 of user core.
Feb 12 19:24:39.307000 audit[4988]: USER_START pid=4988 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.308000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.314000 audit[5011]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.314000 audit[5011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=fffffe9bafa0 a2=0 a3=ffff824256c0 items=0 ppid=2337 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.316000 audit[5011]: NETFILTER_CFG table=nat:132 family=2 entries=78 op=nft_register_rule pid=5011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.316000 audit[5011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffe9bafa0 a2=0 a3=ffff824256c0 items=0 ppid=2337 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.358000 
audit[5038]: NETFILTER_CFG table=filter:133 family=2 entries=32 op=nft_register_rule pid=5038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.358000 audit[5038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffedab5e60 a2=0 a3=ffffb2ad46c0 items=0 ppid=2337 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.359000 audit[5038]: NETFILTER_CFG table=nat:134 family=2 entries=78 op=nft_register_rule pid=5038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.359000 audit[5038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffedab5e60 a2=0 a3=ffffb2ad46c0 items=0 ppid=2337 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.625214 systemd-networkd[1105]: caliaafafe212c5: Gained IPv6LL Feb 12 19:24:39.657075 sshd[4988]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:39.658293 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:54598.service. Feb 12 19:24:39.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.84:22-10.0.0.1:54598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:39.661000 audit[4988]: USER_END pid=4988 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.664000 audit[4988]: CRED_DISP pid=4988 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.666718 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:54592.service: Deactivated successfully. Feb 12 19:24:39.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.84:22-10.0.0.1:54592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:39.667729 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:24:39.670061 systemd-logind[1207]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:24:39.671061 systemd-logind[1207]: Removed session 20. 
Feb 12 19:24:39.719000 audit[5044]: USER_ACCT pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.721153 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 54598 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:39.721000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.721000 audit[5044]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff15e9160 a2=3 a3=1 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.721000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:39.723175 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:39.732122 systemd[1]: Started session-21.scope. Feb 12 19:24:39.732124 systemd-logind[1207]: New session 21 of user core. 
Feb 12 19:24:39.735000 audit[5044]: USER_START pid=5044 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.736000 audit[5049]: CRED_ACQ pid=5049 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.991686 sshd[5044]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:39.993000 audit[5044]: USER_END pid=5044 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.993000 audit[5044]: CRED_DISP pid=5044 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:39.996149 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:54598.service: Deactivated successfully. Feb 12 19:24:39.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.84:22-10.0.0.1:54598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:39.997399 systemd-logind[1207]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:24:39.997465 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:24:39.998445 systemd-logind[1207]: Removed session 21. 
Feb 12 19:24:40.646630 env[1229]: time="2024-02-12T19:24:40.646568043Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.648850 env[1229]: time="2024-02-12T19:24:40.648810682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.651042 env[1229]: time="2024-02-12T19:24:40.651003123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.653843 env[1229]: time="2024-02-12T19:24:40.653796993Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.654491 env[1229]: time="2024-02-12T19:24:40.654452381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 12 19:24:40.656860 env[1229]: time="2024-02-12T19:24:40.656811579Z" level=info msg="CreateContainer within sandbox \"c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 19:24:40.674290 env[1229]: time="2024-02-12T19:24:40.674231066Z" level=info msg="CreateContainer within sandbox \"c7d768dedd147ea65c67e6ea689e4d4e289a729047e036ba028410c70b156cad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dfbb08097cb7ffec336ebe5ffa500da09b7e1cf34723df10c65857c2083ed080\"" Feb 12 19:24:40.675106 env[1229]: time="2024-02-12T19:24:40.675052852Z" 
level=info msg="StartContainer for \"dfbb08097cb7ffec336ebe5ffa500da09b7e1cf34723df10c65857c2083ed080\"" Feb 12 19:24:40.757838 env[1229]: time="2024-02-12T19:24:40.757787248Z" level=info msg="StartContainer for \"dfbb08097cb7ffec336ebe5ffa500da09b7e1cf34723df10c65857c2083ed080\" returns successfully" Feb 12 19:24:41.104000 audit[5124]: NETFILTER_CFG table=filter:135 family=2 entries=32 op=nft_register_rule pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.104000 audit[5124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffd498a9c0 a2=0 a3=ffff80ea66c0 items=0 ppid=2337 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.106000 audit[5124]: NETFILTER_CFG table=nat:136 family=2 entries=78 op=nft_register_rule pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.106000 audit[5124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd498a9c0 a2=0 a3=ffff80ea66c0 items=0 ppid=2337 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.106000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.359782 kubelet[2167]: I0212 19:24:41.359667 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f9f7fb78c-6bg5h" podStartSLOduration=-9.223372031495148e+09 pod.CreationTimestamp="2024-02-12 19:24:36 +0000 UTC" 
firstStartedPulling="2024-02-12 19:24:37.835726915 +0000 UTC m=+80.876777387" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:41.358391198 +0000 UTC m=+84.399441710" watchObservedRunningTime="2024-02-12 19:24:41.359628178 +0000 UTC m=+84.400678690" Feb 12 19:24:41.418000 audit[5150]: NETFILTER_CFG table=filter:137 family=2 entries=32 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.418000 audit[5150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffd9fba700 a2=0 a3=ffffad0cb6c0 items=0 ppid=2337 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.420000 audit[5150]: NETFILTER_CFG table=nat:138 family=2 entries=78 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.420000 audit[5150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd9fba700 a2=0 a3=ffffad0cb6c0 items=0 ppid=2337 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:44.994845 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:41574.service. Feb 12 19:24:44.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.84:22-10.0.0.1:41574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:24:44.996124 kernel: kauditd_printk_skb: 84 callbacks suppressed Feb 12 19:24:44.996187 kernel: audit: type=1130 audit(1707765884.993:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.84:22-10.0.0.1:41574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:45.051856 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 41574 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:45.051000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.055013 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:45.055382 kernel: audit: type=1101 audit(1707765885.051:452): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.055421 kernel: audit: type=1103 audit(1707765885.054:453): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.054000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.059013 kernel: audit: type=1006 audit(1707765885.054:454): pid=5154 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=22 res=1 Feb 12 19:24:45.059102 kernel: audit: type=1300 audit(1707765885.054:454): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcfc872f0 a2=3 a3=1 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.054000 audit[5154]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcfc872f0 a2=3 a3=1 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.054000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:45.063497 kernel: audit: type=1327 audit(1707765885.054:454): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:45.064913 systemd-logind[1207]: New session 22 of user core. Feb 12 19:24:45.065801 systemd[1]: Started session-22.scope. 
Feb 12 19:24:45.069000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.072000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.078127 kernel: audit: type=1105 audit(1707765885.069:455): pid=5154 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.078238 kernel: audit: type=1103 audit(1707765885.072:456): pid=5157 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.146647 kernel: audit: type=1325 audit(1707765885.140:457): table=filter:139 family=2 entries=20 op=nft_register_rule pid=5191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:45.146792 kernel: audit: type=1300 audit(1707765885.140:457): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc107cf00 a2=0 a3=ffff8e8466c0 items=0 ppid=2337 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.140000 audit[5191]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5191 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 12 19:24:45.140000 audit[5191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc107cf00 a2=0 a3=ffff8e8466c0 items=0 ppid=2337 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:45.144000 audit[5191]: NETFILTER_CFG table=nat:140 family=2 entries=162 op=nft_register_chain pid=5191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:45.144000 audit[5191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffc107cf00 a2=0 a3=ffff8e8466c0 items=0 ppid=2337 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:45.232316 sshd[5154]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:45.232000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.232000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:45.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.84:22-10.0.0.1:41574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:45.234907 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:41574.service: Deactivated successfully. Feb 12 19:24:45.236278 systemd-logind[1207]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:24:45.236492 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:24:45.237845 systemd-logind[1207]: Removed session 22. Feb 12 19:24:47.125474 kubelet[2167]: E0212 19:24:47.125076 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:49.125304 kubelet[2167]: E0212 19:24:49.125264 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:50.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.84:22-10.0.0.1:41580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:24:50.234466 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:41580.service. Feb 12 19:24:50.235134 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 12 19:24:50.235187 kernel: audit: type=1130 audit(1707765890.234:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.84:22-10.0.0.1:41580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:24:50.271000 audit[5198]: USER_ACCT pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.272228 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 41580 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:24:50.273230 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:50.272000 audit[5198]: CRED_ACQ pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.276496 kernel: audit: type=1101 audit(1707765890.271:463): pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.276556 kernel: audit: type=1103 audit(1707765890.272:464): pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.277765 kernel: audit: type=1006 audit(1707765890.272:465): pid=5198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 12 19:24:50.277807 kernel: audit: type=1300 audit(1707765890.272:465): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6294c50 a2=3 a3=1 items=0 ppid=1 pid=5198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 12 19:24:50.272000 audit[5198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6294c50 a2=3 a3=1 items=0 ppid=1 pid=5198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:50.276747 systemd-logind[1207]: New session 23 of user core. Feb 12 19:24:50.277598 systemd[1]: Started session-23.scope. Feb 12 19:24:50.272000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:50.280905 kernel: audit: type=1327 audit(1707765890.272:465): proctitle=737368643A20636F7265205B707269765D Feb 12 19:24:50.280961 kernel: audit: type=1105 audit(1707765890.280:466): pid=5198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.280000 audit[5198]: USER_START pid=5198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.283810 kernel: audit: type=1103 audit(1707765890.281:467): pid=5201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.281000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.406690 sshd[5198]: pam_unix(sshd:session): session closed for user core Feb 12 
19:24:50.407000 audit[5198]: USER_END pid=5198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.409479 systemd-logind[1207]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:24:50.409699 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:41580.service: Deactivated successfully. Feb 12 19:24:50.407000 audit[5198]: CRED_DISP pid=5198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.410801 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:24:50.411221 systemd-logind[1207]: Removed session 23. Feb 12 19:24:50.412659 kernel: audit: type=1106 audit(1707765890.407:468): pid=5198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.412733 kernel: audit: type=1104 audit(1707765890.407:469): pid=5198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:24:50.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.84:22-10.0.0.1:41580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Feb 12 19:24:55.126582 kubelet[2167]: E0212 19:24:55.126518 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:24:55.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.84:22-10.0.0.1:50818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:55.409701 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:50818.service.
Feb 12 19:24:55.410778 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:24:55.410826 kernel: audit: type=1130 audit(1707765895.409:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.84:22-10.0.0.1:50818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:55.447000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.447406 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 50818 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:24:55.449052 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:55.448000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.452811 kernel: audit: type=1101 audit(1707765895.447:472): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.452894 kernel: audit: type=1103 audit(1707765895.448:473): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.452924 kernel: audit: type=1006 audit(1707765895.448:474): pid=5214 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Feb 12 19:24:55.454305 kernel: audit: type=1300 audit(1707765895.448:474): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3b798a0 a2=3 a3=1 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:55.448000 audit[5214]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3b798a0 a2=3 a3=1 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:24:55.455290 systemd[1]: Started session-24.scope.
Feb 12 19:24:55.456282 systemd-logind[1207]: New session 24 of user core.
Feb 12 19:24:55.448000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:24:55.458122 kernel: audit: type=1327 audit(1707765895.448:474): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:24:55.460000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.461000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.465654 kernel: audit: type=1105 audit(1707765895.460:475): pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.465713 kernel: audit: type=1103 audit(1707765895.461:476): pid=5217 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.584157 sshd[5214]: pam_unix(sshd:session): session closed for user core
Feb 12 19:24:55.584000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.585000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.587858 systemd-logind[1207]: Session 24 logged out. Waiting for processes to exit.
Feb 12 19:24:55.588637 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:50818.service: Deactivated successfully.
Feb 12 19:24:55.589660 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 19:24:55.589886 kernel: audit: type=1106 audit(1707765895.584:477): pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.589933 kernel: audit: type=1104 audit(1707765895.585:478): pid=5214 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:24:55.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.84:22-10.0.0.1:50818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:24:55.590938 systemd-logind[1207]: Removed session 24.
Feb 12 19:24:56.125266 kubelet[2167]: E0212 19:24:56.125217 2167 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:25:00.587839 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:50832.service.
Feb 12 19:25:00.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.84:22-10.0.0.1:50832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:25:00.591070 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 12 19:25:00.591179 kernel: audit: type=1130 audit(1707765900.587:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.84:22-10.0.0.1:50832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:25:00.627000 audit[5230]: USER_ACCT pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.627408 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 50832 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:25:00.629146 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:25:00.628000 audit[5230]: CRED_ACQ pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.633206 kernel: audit: type=1101 audit(1707765900.627:481): pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.633290 kernel: audit: type=1103 audit(1707765900.628:482): pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.635276 kernel: audit: type=1006 audit(1707765900.628:483): pid=5230 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Feb 12 19:25:00.628000 audit[5230]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeee53940 a2=3 a3=1 items=0 ppid=1 pid=5230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:25:00.638040 kernel: audit: type=1300 audit(1707765900.628:483): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeee53940 a2=3 a3=1 items=0 ppid=1 pid=5230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:25:00.628000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:25:00.639300 kernel: audit: type=1327 audit(1707765900.628:483): proctitle=737368643A20636F7265205B707269765D
Feb 12 19:25:00.642419 systemd-logind[1207]: New session 25 of user core.
Feb 12 19:25:00.643475 systemd[1]: Started session-25.scope.
Feb 12 19:25:00.652000 audit[5230]: USER_START pid=5230 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.652000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.659254 kernel: audit: type=1105 audit(1707765900.652:484): pid=5230 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.659349 kernel: audit: type=1103 audit(1707765900.652:485): pid=5233 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.797162 sshd[5230]: pam_unix(sshd:session): session closed for user core
Feb 12 19:25:00.798000 audit[5230]: USER_END pid=5230 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.799710 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:50832.service: Deactivated successfully.
Feb 12 19:25:00.800677 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 19:25:00.798000 audit[5230]: CRED_DISP pid=5230 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.803979 kernel: audit: type=1106 audit(1707765900.798:486): pid=5230 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.804041 kernel: audit: type=1104 audit(1707765900.798:487): pid=5230 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:25:00.804049 systemd-logind[1207]: Session 25 logged out. Waiting for processes to exit.
Feb 12 19:25:00.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.84:22-10.0.0.1:50832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:25:00.804845 systemd-logind[1207]: Removed session 25.