Jun 25 14:14:37.845507 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 25 14:14:37.845527 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024
Jun 25 14:14:37.845536 kernel: efi: EFI v2.70 by EDK II
Jun 25 14:14:37.845542 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9210018 MEMRESERVE=0xd9523d18
Jun 25 14:14:37.845547 kernel: random: crng init done
Jun 25 14:14:37.845553 kernel: ACPI: Early table checksum verification disabled
Jun 25 14:14:37.845559 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jun 25 14:14:37.845566 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jun 25 14:14:37.845572 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845577 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845583 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845588 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845593 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845605 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845614 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845620 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845626 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 14:14:37.845632 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jun 25 14:14:37.845638 kernel: NUMA: Failed to initialise from firmware
Jun 25 14:14:37.845643 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 14:14:37.845649 kernel: NUMA: NODE_DATA [mem 0xdcb07800-0xdcb0cfff]
Jun 25 14:14:37.845655 kernel: Zone ranges:
Jun 25 14:14:37.845660 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 14:14:37.845667 kernel: DMA32 empty
Jun 25 14:14:37.845673 kernel: Normal empty
Jun 25 14:14:37.845678 kernel: Movable zone start for each node
Jun 25 14:14:37.845684 kernel: Early memory node ranges
Jun 25 14:14:37.845690 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jun 25 14:14:37.845695 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jun 25 14:14:37.845701 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jun 25 14:14:37.845706 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jun 25 14:14:37.845712 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jun 25 14:14:37.845718 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jun 25 14:14:37.845723 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jun 25 14:14:37.845729 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jun 25 14:14:37.845736 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jun 25 14:14:37.845741 kernel: psci: probing for conduit method from ACPI.
Jun 25 14:14:37.845747 kernel: psci: PSCIv1.1 detected in firmware.
Jun 25 14:14:37.845753 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 25 14:14:37.845759 kernel: psci: Trusted OS migration not required
Jun 25 14:14:37.845768 kernel: psci: SMC Calling Convention v1.1
Jun 25 14:14:37.845775 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jun 25 14:14:37.845788 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880
Jun 25 14:14:37.845794 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096
Jun 25 14:14:37.845801 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jun 25 14:14:37.845807 kernel: Detected PIPT I-cache on CPU0
Jun 25 14:14:37.845813 kernel: CPU features: detected: GIC system register CPU interface
Jun 25 14:14:37.845819 kernel: CPU features: detected: Hardware dirty bit management
Jun 25 14:14:37.845825 kernel: CPU features: detected: Spectre-v4
Jun 25 14:14:37.845831 kernel: CPU features: detected: Spectre-BHB
Jun 25 14:14:37.845837 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 25 14:14:37.845844 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 25 14:14:37.845851 kernel: CPU features: detected: ARM erratum 1418040
Jun 25 14:14:37.845857 kernel: alternatives: applying boot alternatives
Jun 25 14:14:37.845863 kernel: Fallback order for Node 0: 0
Jun 25 14:14:37.845869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jun 25 14:14:37.845875 kernel: Policy zone: DMA
Jun 25 14:14:37.845882 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8
Jun 25 14:14:37.845889 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 14:14:37.845895 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 14:14:37.845901 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 14:14:37.845908 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 14:14:37.845915 kernel: Memory: 2458544K/2572288K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 113744K reserved, 0K cma-reserved)
Jun 25 14:14:37.845922 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 14:14:37.845928 kernel: trace event string verifier disabled
Jun 25 14:14:37.845934 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 14:14:37.845940 kernel: rcu: RCU event tracing is enabled.
Jun 25 14:14:37.845946 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 14:14:37.845952 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 14:14:37.845958 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 14:14:37.845965 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 14:14:37.845971 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 14:14:37.845977 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 25 14:14:37.845982 kernel: GICv3: 256 SPIs implemented
Jun 25 14:14:37.845990 kernel: GICv3: 0 Extended SPIs implemented
Jun 25 14:14:37.845996 kernel: Root IRQ handler: gic_handle_irq
Jun 25 14:14:37.846002 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 25 14:14:37.846008 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jun 25 14:14:37.846014 kernel: ITS [mem 0x08080000-0x0809ffff]
Jun 25 14:14:37.846021 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jun 25 14:14:37.846027 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jun 25 14:14:37.846033 kernel: GICv3: using LPI property table @0x00000000400e0000
Jun 25 14:14:37.846039 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400f0000
Jun 25 14:14:37.846045 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 14:14:37.846051 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 14:14:37.846058 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 25 14:14:37.846064 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 25 14:14:37.846070 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 25 14:14:37.846076 kernel: arm-pv: using stolen time PV
Jun 25 14:14:37.846082 kernel: Console: colour dummy device 80x25
Jun 25 14:14:37.846088 kernel: ACPI: Core revision 20220331
Jun 25 14:14:37.846095 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 25 14:14:37.846101 kernel: pid_max: default: 32768 minimum: 301
Jun 25 14:14:37.846107 kernel: LSM: Security Framework initializing
Jun 25 14:14:37.846113 kernel: SELinux: Initializing.
Jun 25 14:14:37.846120 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 14:14:37.846126 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 14:14:37.846132 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jun 25 14:14:37.846139 kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jun 25 14:14:37.846145 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jun 25 14:14:37.846151 kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jun 25 14:14:37.846157 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 14:14:37.846163 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 14:14:37.846169 kernel: Platform MSI: ITS@0x8080000 domain created
Jun 25 14:14:37.846176 kernel: PCI/MSI: ITS@0x8080000 domain created
Jun 25 14:14:37.846182 kernel: Remapping and enabling EFI services.
Jun 25 14:14:37.846188 kernel: smp: Bringing up secondary CPUs ...
Jun 25 14:14:37.846194 kernel: Detected PIPT I-cache on CPU1
Jun 25 14:14:37.846201 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jun 25 14:14:37.846207 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040100000
Jun 25 14:14:37.846214 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 14:14:37.846220 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 25 14:14:37.846227 kernel: Detected PIPT I-cache on CPU2
Jun 25 14:14:37.846234 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jun 25 14:14:37.846260 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040110000
Jun 25 14:14:37.846267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 14:14:37.846273 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jun 25 14:14:37.846280 kernel: Detected PIPT I-cache on CPU3
Jun 25 14:14:37.846291 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jun 25 14:14:37.846300 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040120000
Jun 25 14:14:37.846306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 25 14:14:37.846312 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jun 25 14:14:37.846319 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 14:14:37.846325 kernel: SMP: Total of 4 processors activated.
Jun 25 14:14:37.846332 kernel: CPU features: detected: 32-bit EL0 Support
Jun 25 14:14:37.846339 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 25 14:14:37.846346 kernel: CPU features: detected: Common not Private translations
Jun 25 14:14:37.846352 kernel: CPU features: detected: CRC32 instructions
Jun 25 14:14:37.846359 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 25 14:14:37.846365 kernel: CPU features: detected: LSE atomic instructions
Jun 25 14:14:37.846371 kernel: CPU features: detected: Privileged Access Never
Jun 25 14:14:37.846379 kernel: CPU features: detected: RAS Extension Support
Jun 25 14:14:37.846386 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 25 14:14:37.846392 kernel: CPU: All CPU(s) started at EL1
Jun 25 14:14:37.846398 kernel: alternatives: applying system-wide alternatives
Jun 25 14:14:37.846405 kernel: devtmpfs: initialized
Jun 25 14:14:37.846411 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 14:14:37.846418 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 14:14:37.846424 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 14:14:37.846431 kernel: SMBIOS 3.0.0 present.
Jun 25 14:14:37.846438 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jun 25 14:14:37.846445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 14:14:37.846451 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 25 14:14:37.846458 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 25 14:14:37.846465 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 25 14:14:37.846471 kernel: audit: initializing netlink subsys (disabled)
Jun 25 14:14:37.846478 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jun 25 14:14:37.846485 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 14:14:37.846492 kernel: cpuidle: using governor menu
Jun 25 14:14:37.846500 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 25 14:14:37.846506 kernel: ASID allocator initialised with 32768 entries
Jun 25 14:14:37.846514 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 14:14:37.846520 kernel: Serial: AMBA PL011 UART driver
Jun 25 14:14:37.846527 kernel: KASLR enabled
Jun 25 14:14:37.846533 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 14:14:37.846540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 14:14:37.846547 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 25 14:14:37.846554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 25 14:14:37.846561 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 14:14:37.846568 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 14:14:37.846574 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 25 14:14:37.846581 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 25 14:14:37.846587 kernel: ACPI: Added _OSI(Module Device)
Jun 25 14:14:37.846594 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 14:14:37.846601 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 14:14:37.846607 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 14:14:37.846614 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 14:14:37.846622 kernel: ACPI: Interpreter enabled
Jun 25 14:14:37.846628 kernel: ACPI: Using GIC for interrupt routing
Jun 25 14:14:37.846635 kernel: ACPI: MCFG table detected, 1 entries
Jun 25 14:14:37.846642 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jun 25 14:14:37.846648 kernel: printk: console [ttyAMA0] enabled
Jun 25 14:14:37.846655 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 14:14:37.846779 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 14:14:37.846854 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jun 25 14:14:37.846920 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jun 25 14:14:37.846978 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jun 25 14:14:37.847036 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jun 25 14:14:37.847045 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jun 25 14:14:37.847051 kernel: PCI host bridge to bus 0000:00
Jun 25 14:14:37.847121 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jun 25 14:14:37.847175 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jun 25 14:14:37.847232 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jun 25 14:14:37.847297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 14:14:37.847371 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jun 25 14:14:37.847444 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 14:14:37.847505 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jun 25 14:14:37.847570 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jun 25 14:14:37.847631 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 14:14:37.847692 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jun 25 14:14:37.847754 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jun 25 14:14:37.847822 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jun 25 14:14:37.847880 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jun 25 14:14:37.847935 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jun 25 14:14:37.847988 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jun 25 14:14:37.847996 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jun 25 14:14:37.848006 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jun 25 14:14:37.848012 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jun 25 14:14:37.848019 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jun 25 14:14:37.848026 kernel: iommu: Default domain type: Translated
Jun 25 14:14:37.848032 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 25 14:14:37.848039 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 25 14:14:37.848045 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 25 14:14:37.848052 kernel: PTP clock support registered
Jun 25 14:14:37.848058 kernel: Registered efivars operations
Jun 25 14:14:37.848066 kernel: vgaarb: loaded
Jun 25 14:14:37.848072 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 25 14:14:37.848079 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 14:14:37.848085 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 14:14:37.848092 kernel: pnp: PnP ACPI init
Jun 25 14:14:37.848156 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jun 25 14:14:37.848165 kernel: pnp: PnP ACPI: found 1 devices
Jun 25 14:14:37.848172 kernel: NET: Registered PF_INET protocol family
Jun 25 14:14:37.848180 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 14:14:37.848187 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 14:14:37.848194 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 14:14:37.848200 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 14:14:37.848206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 14:14:37.848213 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 14:14:37.848219 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 14:14:37.848226 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 14:14:37.848233 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 14:14:37.848240 kernel: PCI: CLS 0 bytes, default 64
Jun 25 14:14:37.848255 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jun 25 14:14:37.848262 kernel: kvm [1]: HYP mode not available
Jun 25 14:14:37.848269 kernel: Initialise system trusted keyrings
Jun 25 14:14:37.848276 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 14:14:37.848283 kernel: Key type asymmetric registered
Jun 25 14:14:37.848290 kernel: Asymmetric key parser 'x509' registered
Jun 25 14:14:37.848296 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed
Jun 25 14:14:37.848303 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jun 25 14:14:37.848311 kernel: io scheduler mq-deadline registered
Jun 25 14:14:37.848317 kernel: io scheduler kyber registered
Jun 25 14:14:37.848323 kernel: io scheduler bfq registered
Jun 25 14:14:37.848330 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jun 25 14:14:37.848337 kernel: ACPI: button: Power Button [PWRB]
Jun 25 14:14:37.848344 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jun 25 14:14:37.848405 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jun 25 14:14:37.848414 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 14:14:37.848421 kernel: thunder_xcv, ver 1.0
Jun 25 14:14:37.848428 kernel: thunder_bgx, ver 1.0
Jun 25 14:14:37.848435 kernel: nicpf, ver 1.0
Jun 25 14:14:37.848441 kernel: nicvf, ver 1.0
Jun 25 14:14:37.848510 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 25 14:14:37.848569 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:14:37 UTC (1719324877)
Jun 25 14:14:37.848578 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 25 14:14:37.848585 kernel: NET: Registered PF_INET6 protocol family
Jun 25 14:14:37.848591 kernel: Segment Routing with IPv6
Jun 25 14:14:37.848600 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 14:14:37.848606 kernel: NET: Registered PF_PACKET protocol family
Jun 25 14:14:37.848613 kernel: Key type dns_resolver registered
Jun 25 14:14:37.848619 kernel: registered taskstats version 1
Jun 25 14:14:37.848626 kernel: Loading compiled-in X.509 certificates
Jun 25 14:14:37.848633 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83'
Jun 25 14:14:37.848639 kernel: Key type .fscrypt registered
Jun 25 14:14:37.848646 kernel: Key type fscrypt-provisioning registered
Jun 25 14:14:37.848652 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 14:14:37.848660 kernel: ima: Allocated hash algorithm: sha1
Jun 25 14:14:37.848667 kernel: ima: No architecture policies found
Jun 25 14:14:37.848674 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 25 14:14:37.848680 kernel: clk: Disabling unused clocks
Jun 25 14:14:37.848687 kernel: Freeing unused kernel memory: 34688K
Jun 25 14:14:37.848693 kernel: Run /init as init process
Jun 25 14:14:37.848700 kernel: with arguments:
Jun 25 14:14:37.848707 kernel: /init
Jun 25 14:14:37.848713 kernel: with environment:
Jun 25 14:14:37.848720 kernel: HOME=/
Jun 25 14:14:37.848727 kernel: TERM=linux
Jun 25 14:14:37.848734 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 14:14:37.848743 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jun 25 14:14:37.848751 systemd[1]: Detected virtualization kvm.
Jun 25 14:14:37.848758 systemd[1]: Detected architecture arm64.
Jun 25 14:14:37.848765 systemd[1]: Running in initrd.
Jun 25 14:14:37.848772 systemd[1]: No hostname configured, using default hostname.
Jun 25 14:14:37.848780 systemd[1]: Hostname set to .
Jun 25 14:14:37.848795 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 14:14:37.848802 systemd[1]: Queued start job for default target initrd.target.
Jun 25 14:14:37.848809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 14:14:37.848816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 14:14:37.848823 systemd[1]: Reached target paths.target - Path Units.
Jun 25 14:14:37.848830 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 14:14:37.848836 systemd[1]: Reached target swap.target - Swaps.
Jun 25 14:14:37.848845 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 14:14:37.848852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 14:14:37.848860 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 14:14:37.848867 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jun 25 14:14:37.848874 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 14:14:37.848881 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 14:14:37.848888 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 14:14:37.848896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 14:14:37.848903 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 14:14:37.848910 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 14:14:37.848917 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 14:14:37.848924 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 14:14:37.848931 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 14:14:37.848938 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 14:14:37.848945 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 14:14:37.848952 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console...
Jun 25 14:14:37.848960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 14:14:37.848967 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 14:14:37.848974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 14:14:37.848982 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console.
Jun 25 14:14:37.848990 kernel: audit: type=1130 audit(1719324877.845:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.848997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 14:14:37.849009 systemd-journald[224]: Journal started
Jun 25 14:14:37.849048 systemd-journald[224]: Runtime Journal (/run/log/journal/fc54c9c90dc24b2e8446f29d3b5f72ea) is 6.0M, max 48.6M, 42.6M free.
Jun 25 14:14:37.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.839659 systemd-modules-load[225]: Inserted module 'overlay'
Jun 25 14:14:37.851189 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 14:14:37.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.852901 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 14:14:37.856735 kernel: audit: type=1130 audit(1719324877.851:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.860410 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 14:14:37.860443 kernel: Bridge firewalling registered
Jun 25 14:14:37.859374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 14:14:37.864011 kernel: audit: type=1130 audit(1719324877.860:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.860763 systemd-modules-load[225]: Inserted module 'br_netfilter'
Jun 25 14:14:37.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.865000 audit: BPF prog-id=6 op=LOAD
Jun 25 14:14:37.862494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 14:14:37.871149 kernel: audit: type=1130 audit(1719324877.864:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.871169 kernel: audit: type=1334 audit(1719324877.865:6): prog-id=6 op=LOAD
Jun 25 14:14:37.866024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 14:14:37.873349 kernel: SCSI subsystem initialized
Jun 25 14:14:37.874681 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 14:14:37.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.879277 kernel: audit: type=1130 audit(1719324877.875:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.881882 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 14:14:37.881903 kernel: device-mapper: uevent: version 1.0.3
Jun 25 14:14:37.881926 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Jun 25 14:14:37.884168 systemd-modules-load[225]: Inserted module 'dm_multipath'
Jun 25 14:14:37.889742 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 14:14:37.890947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 14:14:37.895396 kernel: audit: type=1130 audit(1719324877.892:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.893473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 14:14:37.898111 systemd-resolved[241]: Positive Trust Anchors:
Jun 25 14:14:37.898128 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 14:14:37.898155 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jun 25 14:14:37.901216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 14:14:37.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.909275 kernel: audit: type=1130 audit(1719324877.906:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.902579 systemd-resolved[241]: Defaulting to hostname 'linux'.
Jun 25 14:14:37.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.906423 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 14:14:37.915285 kernel: audit: type=1130 audit(1719324877.909:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:37.915307 dracut-cmdline[243]: dracut-dracut-053
Jun 25 14:14:37.915307 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8
Jun 25 14:14:37.910308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 14:14:37.984268 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 14:14:37.992274 kernel: iscsi: registered transport (tcp)
Jun 25 14:14:38.008274 kernel: iscsi: registered transport (qla4xxx)
Jun 25 14:14:38.008340 kernel: QLogic iSCSI HBA Driver
Jun 25 14:14:38.052174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 14:14:38.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:38.062511 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 14:14:38.118274 kernel: raid6: neonx8 gen() 15755 MB/s
Jun 25 14:14:38.135261 kernel: raid6: neonx4 gen() 15643 MB/s
Jun 25 14:14:38.152259 kernel: raid6: neonx2 gen() 13245 MB/s
Jun 25 14:14:38.169260 kernel: raid6: neonx1 gen() 10475 MB/s
Jun 25 14:14:38.186261 kernel: raid6: int64x8 gen() 6984 MB/s
Jun 25 14:14:38.203266 kernel: raid6: int64x4 gen() 7330 MB/s
Jun 25 14:14:38.220261 kernel: raid6: int64x2 gen() 6123 MB/s
Jun 25 14:14:38.237257 kernel: raid6: int64x1 gen() 5047 MB/s
Jun 25 14:14:38.237272 kernel: raid6: using algorithm neonx8 gen() 15755 MB/s
Jun 25 14:14:38.254271 kernel: raid6: .... xor() 11876 MB/s, rmw enabled
Jun 25 14:14:38.254292 kernel: raid6: using neon recovery algorithm
Jun 25 14:14:38.259262 kernel: xor: measuring software checksum speed
Jun 25 14:14:38.260256 kernel: 8regs : 19864 MB/sec
Jun 25 14:14:38.261656 kernel: 32regs : 19678 MB/sec
Jun 25 14:14:38.261668 kernel: arm64_neon : 27098 MB/sec
Jun 25 14:14:38.261684 kernel: xor: using function: arm64_neon (27098 MB/sec)
Jun 25 14:14:38.316264 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jun 25 14:14:38.327277 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 14:14:38.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:38.328000 audit: BPF prog-id=7 op=LOAD
Jun 25 14:14:38.328000 audit: BPF prog-id=8 op=LOAD
Jun 25 14:14:38.333440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 14:14:38.346170 systemd-udevd[426]: Using default interface naming scheme 'v252'.
Jun 25 14:14:38.349558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 14:14:38.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:14:38.352063 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 14:14:38.365521 dracut-pre-trigger[433]: rd.md=0: removing MD RAID activation
Jun 25 14:14:38.394445 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 14:14:38.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jun 25 14:14:38.405509 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:14:38.440514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:14:38.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:38.470264 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 25 14:14:38.478818 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 14:14:38.478912 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 14:14:38.478922 kernel: GPT:9289727 != 19775487 Jun 25 14:14:38.478931 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 14:14:38.478939 kernel: GPT:9289727 != 19775487 Jun 25 14:14:38.478954 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 14:14:38.478963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:14:38.490265 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (490) Jun 25 14:14:38.492267 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (484) Jun 25 14:14:38.493050 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 14:14:38.498398 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:14:38.501774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 14:14:38.504599 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 14:14:38.505596 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jun 25 14:14:38.521441 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:14:38.526925 disk-uuid[499]: Primary Header is updated. Jun 25 14:14:38.526925 disk-uuid[499]: Secondary Entries is updated. Jun 25 14:14:38.526925 disk-uuid[499]: Secondary Header is updated. Jun 25 14:14:38.530264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:14:39.541267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:14:39.541432 disk-uuid[500]: The operation has completed successfully. Jun 25 14:14:39.564425 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:14:39.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.564533 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:14:39.576504 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:14:39.579909 sh[513]: Success Jun 25 14:14:39.600276 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:14:39.633830 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:14:39.641393 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:14:39.643356 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:14:39.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:39.650662 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:14:39.650700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:14:39.650710 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:14:39.652527 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:14:39.652562 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:14:39.655770 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:14:39.657232 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:14:39.669435 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 14:14:39.671032 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 14:14:39.678847 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:14:39.678895 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:14:39.678912 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:14:39.687168 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:14:39.688813 kernel: BTRFS info (device vda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:14:39.692822 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:14:39.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.699492 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 25 14:14:39.770584 ignition[608]: Ignition 2.15.0 Jun 25 14:14:39.770595 ignition[608]: Stage: fetch-offline Jun 25 14:14:39.770639 ignition[608]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:14:39.770647 ignition[608]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:14:39.772480 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:14:39.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.774000 audit: BPF prog-id=9 op=LOAD Jun 25 14:14:39.770745 ignition[608]: parsed url from cmdline: "" Jun 25 14:14:39.770748 ignition[608]: no config URL provided Jun 25 14:14:39.770753 ignition[608]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:14:39.770760 ignition[608]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:14:39.770794 ignition[608]: op(1): [started] loading QEMU firmware config module Jun 25 14:14:39.770800 ignition[608]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 14:14:39.781150 ignition[608]: op(1): [finished] loading QEMU firmware config module Jun 25 14:14:39.786480 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:14:39.806470 systemd-networkd[705]: lo: Link UP Jun 25 14:14:39.806485 systemd-networkd[705]: lo: Gained carrier Jun 25 14:14:39.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.807137 systemd-networkd[705]: Enumeration completed Jun 25 14:14:39.807263 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 25 14:14:39.807521 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:14:39.807525 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:14:39.808452 systemd[1]: Reached target network.target - Network. Jun 25 14:14:39.809146 systemd-networkd[705]: eth0: Link UP Jun 25 14:14:39.809150 systemd-networkd[705]: eth0: Gained carrier Jun 25 14:14:39.809156 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:14:39.819418 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:14:39.827609 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:14:39.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.829697 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 14:14:39.832717 iscsid[711]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:14:39.832717 iscsid[711]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 14:14:39.832717 iscsid[711]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:14:39.832717 iscsid[711]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jun 25 14:14:39.832717 iscsid[711]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:14:39.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.844226 iscsid[711]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:14:39.833330 systemd-networkd[705]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 14:14:39.835496 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:14:39.841492 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:14:39.849809 ignition[608]: parsing config with SHA512: 3b3c430336cdb21a7717bba9f974f99b0fb4370c0ed8748d868c1f2c0901574cbafd59bc4c83a0048e968dccf48bab1ba8485b91924ec3d9607c07ea2ab56023 Jun 25 14:14:39.854328 unknown[608]: fetched base config from "system" Jun 25 14:14:39.854341 unknown[608]: fetched user config from "qemu" Jun 25 14:14:39.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.854852 ignition[608]: fetch-offline: fetch-offline passed Jun 25 14:14:39.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.854734 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:14:39.854922 ignition[608]: Ignition finished successfully Jun 25 14:14:39.856378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:14:39.857834 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jun 25 14:14:39.858985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:14:39.860525 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:14:39.866491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:14:39.867262 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 14:14:39.868124 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:14:39.874819 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:14:39.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.882032 ignition[722]: Ignition 2.15.0 Jun 25 14:14:39.882043 ignition[722]: Stage: kargs Jun 25 14:14:39.882172 ignition[722]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:14:39.882182 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:14:39.885048 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:14:39.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.883197 ignition[722]: kargs: kargs passed Jun 25 14:14:39.883260 ignition[722]: Ignition finished successfully Jun 25 14:14:39.897481 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 25 14:14:39.907803 ignition[734]: Ignition 2.15.0 Jun 25 14:14:39.907815 ignition[734]: Stage: disks Jun 25 14:14:39.907932 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:14:39.907942 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:14:39.908973 ignition[734]: disks: disks passed Jun 25 14:14:39.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:39.910019 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:14:39.909024 ignition[734]: Ignition finished successfully Jun 25 14:14:39.911172 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:14:39.912465 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:14:39.913802 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:14:39.915029 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:14:39.916461 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:14:39.935457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:14:39.946203 systemd-fsck[744]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 14:14:40.040587 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:14:40.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:40.053404 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:14:40.102278 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:14:40.103106 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jun 25 14:14:40.104071 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:14:40.120378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:14:40.122144 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:14:40.123140 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 14:14:40.123176 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 14:14:40.130128 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750) Jun 25 14:14:40.123202 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:14:40.134307 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:14:40.134394 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:14:40.134420 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:14:40.125872 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 14:14:40.128999 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:14:40.139972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:14:40.170648 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:14:40.174270 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:14:40.178700 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:14:40.182186 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:14:40.270189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jun 25 14:14:40.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:40.277442 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:14:40.279012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:14:40.283018 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:14:40.285165 kernel: BTRFS info (device vda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:14:40.297874 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:14:40.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:40.299349 ignition[862]: INFO : Ignition 2.15.0 Jun 25 14:14:40.299349 ignition[862]: INFO : Stage: mount Jun 25 14:14:40.299349 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:14:40.299349 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:14:40.303622 ignition[862]: INFO : mount: mount passed Jun 25 14:14:40.303622 ignition[862]: INFO : Ignition finished successfully Jun 25 14:14:40.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:40.301570 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:14:40.307567 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:14:40.989424 systemd-networkd[705]: eth0: Gained IPv6LL Jun 25 14:14:41.112499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 14:14:41.121261 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (873) Jun 25 14:14:41.123467 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:14:41.123490 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:14:41.123499 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:14:41.126099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:14:41.149309 ignition[891]: INFO : Ignition 2.15.0 Jun 25 14:14:41.149309 ignition[891]: INFO : Stage: files Jun 25 14:14:41.151088 ignition[891]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:14:41.151088 ignition[891]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:14:41.151088 ignition[891]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:14:41.154609 ignition[891]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:14:41.154609 ignition[891]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:14:41.157873 ignition[891]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:14:41.159252 ignition[891]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:14:41.159252 ignition[891]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:14:41.158637 unknown[891]: wrote ssh authorized keys file for user: core Jun 25 14:14:41.162769 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 14:14:41.162769 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 14:14:41.162769 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:14:41.162769 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:14:41.198871 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 14:14:41.257628 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:14:41.259887 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 14:14:41.525186 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 14:14:41.779368 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:14:41.779368 ignition[891]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 14:14:41.782309 ignition[891]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 14:14:41.783920 ignition[891]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 14:14:41.814550 ignition[891]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 14:14:41.816423 ignition[891]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 14:14:41.816423 ignition[891]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:14:41.816423 ignition[891]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:14:41.816423 ignition[891]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:14:41.816423 ignition[891]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 
14:14:41.816423 ignition[891]: INFO : files: files passed Jun 25 14:14:41.816423 ignition[891]: INFO : Ignition finished successfully Jun 25 14:14:41.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.816739 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:14:41.826552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:14:41.828095 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:14:41.830264 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:14:41.830370 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:14:41.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.832609 initrd-setup-root-after-ignition[916]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 14:14:41.834597 initrd-setup-root-after-ignition[918]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:14:41.834597 initrd-setup-root-after-ignition[918]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:14:41.837293 initrd-setup-root-after-ignition[922]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:14:41.837836 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jun 25 14:14:41.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.839568 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:14:41.841790 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:14:41.855002 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:14:41.856167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:14:41.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.858313 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:14:41.860004 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:14:41.861706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:14:41.863266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:14:41.875008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:14:41.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.884487 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jun 25 14:14:41.892337 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:14:41.893215 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:14:41.894662 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:14:41.896008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:14:41.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.896132 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:14:41.897423 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:14:41.898761 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:14:41.900765 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:14:41.902151 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:14:41.903658 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:14:41.905366 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:14:41.906984 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:14:41.908705 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:14:41.910255 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:14:41.912047 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:14:41.913603 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:14:41.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:41.914814 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:14:41.914935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:14:41.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.916599 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:14:41.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.917788 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:14:41.917899 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:14:41.919314 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:14:41.919413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:14:41.920870 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:14:41.922307 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 14:14:41.926314 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:14:41.927196 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:14:41.928778 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:14:41.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.930832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jun 25 14:14:41.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.930964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:14:41.932162 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:14:41.932268 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:14:41.943561 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:14:41.944679 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 14:14:41.946286 iscsid[711]: iscsid shutting down. Jun 25 14:14:41.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.945760 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:14:41.945912 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:14:41.948060 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:14:41.948899 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:14:41.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.949039 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:14:41.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.951859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jun 25 14:14:41.951982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:14:41.954740 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 14:14:41.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.955167 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 14:14:41.956973 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:14:41.959634 ignition[936]: INFO : Ignition 2.15.0 Jun 25 14:14:41.959634 ignition[936]: INFO : Stage: umount Jun 25 14:14:41.959634 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:14:41.959634 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:14:41.959634 ignition[936]: INFO : umount: umount passed Jun 25 14:14:41.959634 ignition[936]: INFO : Ignition finished successfully Jun 25 14:14:41.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.957045 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jun 25 14:14:41.958184 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:14:41.960584 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:14:41.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.960684 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:14:41.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.962414 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:14:41.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.962898 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:14:41.962976 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:14:41.964520 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:14:41.964605 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:14:41.966871 systemd[1]: Stopped target network.target - Network. Jun 25 14:14:41.971725 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:14:41.971776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:14:41.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.973110 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jun 25 14:14:41.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.973155 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:14:41.974697 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:14:41.974736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:14:41.977157 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:14:41.977204 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:14:41.979160 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:14:41.996000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:14:41.981094 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:14:41.984289 systemd-networkd[705]: eth0: DHCPv6 lease lost Jun 25 14:14:41.998000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:14:41.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.986808 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:14:42.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.986913 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:14:41.989086 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:14:42.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:41.989177 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:14:42.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:41.990636 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:14:41.990668 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:14:41.996428 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 14:14:41.997877 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:14:41.997943 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:14:41.999361 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:14:41.999399 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:14:42.001229 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:14:42.001284 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:14:42.002815 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:14:42.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.002856 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:14:42.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.008973 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 14:14:42.012855 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:14:42.012935 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:14:42.013523 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:14:42.013649 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:14:42.016071 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:14:42.016161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:14:42.028782 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:14:42.028839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:14:42.030507 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:14:42.030539 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:14:42.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.032127 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:14:42.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.032175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:14:42.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.033729 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jun 25 14:14:42.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.033781 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:14:42.035184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:14:42.035224 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:14:42.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.036652 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:14:42.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.036693 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:14:42.039437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:14:42.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.040484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:14:42.040559 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:14:42.042626 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jun 25 14:14:42.042750 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:14:42.045020 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:14:42.045104 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:14:42.046322 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:14:42.048608 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:14:42.055040 systemd[1]: Switching root. Jun 25 14:14:42.059000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:14:42.059000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:14:42.059000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:14:42.059000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:14:42.059000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:14:42.076841 systemd-journald[224]: Journal stopped Jun 25 14:14:42.752135 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jun 25 14:14:42.752193 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jun 25 14:14:42.752208 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:14:42.752222 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:14:42.752231 kernel: SELinux: policy capability open_perms=1 Jun 25 14:14:42.752256 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:14:42.752267 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:14:42.752276 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:14:42.752285 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:14:42.752294 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:14:42.752304 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:14:42.752315 kernel: kauditd_printk_skb: 73 callbacks suppressed Jun 25 14:14:42.752325 kernel: audit: type=1403 audit(1719324882.182:84): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:14:42.752336 systemd[1]: Successfully loaded SELinux policy in 40.410ms. Jun 25 14:14:42.752353 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.306ms. Jun 25 14:14:42.752364 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:14:42.752376 systemd[1]: Detected virtualization kvm. Jun 25 14:14:42.752386 systemd[1]: Detected architecture arm64. Jun 25 14:14:42.752398 systemd[1]: Detected first boot. Jun 25 14:14:42.752408 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:14:42.752419 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:14:42.752431 systemd[1]: Queued start job for default target multi-user.target. 
Jun 25 14:14:42.752442 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 14:14:42.752453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:14:42.752464 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:14:42.752474 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:14:42.752484 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:14:42.752496 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:14:42.752506 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:14:42.752516 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:14:42.752526 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:14:42.752537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:14:42.752547 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:14:42.752558 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:14:42.752568 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:14:42.752580 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:14:42.752590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:14:42.752601 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:14:42.752611 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:14:42.752621 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:14:42.752631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jun 25 14:14:42.752642 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:14:42.752653 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:14:42.752666 kernel: audit: type=1400 audit(1719324882.655:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jun 25 14:14:42.752676 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:14:42.752687 kernel: audit: type=1335 audit(1719324882.655:86): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 14:14:42.752697 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:14:42.752707 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:14:42.752718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:14:42.752729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:14:42.752740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:14:42.752750 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:14:42.752762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:14:42.752780 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:14:42.752793 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:14:42.752803 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:14:42.752813 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:14:42.752823 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jun 25 14:14:42.752834 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:14:42.752844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:14:42.752856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:14:42.752867 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:14:42.752878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:14:42.752889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:14:42.752943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:14:42.752958 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:14:42.752969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:14:42.752980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 14:14:42.752991 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 14:14:42.753003 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jun 25 14:14:42.753014 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:14:42.753024 kernel: loop: module loaded Jun 25 14:14:42.753036 kernel: fuse: init (API version 7.37) Jun 25 14:14:42.753047 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:14:42.753059 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:14:42.753073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jun 25 14:14:42.753083 kernel: audit: type=1305 audit(1719324882.750:87): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:14:42.753095 systemd-journald[1063]: Journal started Jun 25 14:14:42.753140 systemd-journald[1063]: Runtime Journal (/run/log/journal/fc54c9c90dc24b2e8446f29d3b5f72ea) is 6.0M, max 48.6M, 42.6M free. Jun 25 14:14:42.655000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jun 25 14:14:42.655000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 14:14:42.750000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:14:42.750000 audit[1063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd49f4a00 a2=4000 a3=1 items=0 ppid=1 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:42.764449 kernel: audit: type=1300 audit(1719324882.750:87): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd49f4a00 a2=4000 a3=1 items=0 ppid=1 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:42.764518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jun 25 14:14:42.764539 kernel: audit: type=1327 audit(1719324882.750:87): proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:14:42.764551 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:14:42.764572 kernel: ACPI: bus type drm_connector registered Jun 25 14:14:42.764585 kernel: audit: type=1130 audit(1719324882.764:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.750000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:14:42.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.765017 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:14:42.767525 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:14:42.770684 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:14:42.771687 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 14:14:42.772852 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:14:42.774023 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:14:42.775127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:14:42.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.776465 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:14:42.776640 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 25 14:14:42.779299 kernel: audit: type=1130 audit(1719324882.776:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.779723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:14:42.779903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:14:42.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.783341 kernel: audit: type=1130 audit(1719324882.779:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.783382 kernel: audit: type=1131 audit(1719324882.779:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:42.784762 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:14:42.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.786052 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:14:42.786231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:14:42.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.787731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:14:42.787906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:14:42.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.789099 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:14:42.789518 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jun 25 14:14:42.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.790970 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:14:42.791156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:14:42.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.792605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:14:42.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.794125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:14:42.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.795545 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jun 25 14:14:42.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.796971 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:14:42.802574 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:14:42.805021 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:14:42.806043 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:14:42.809323 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:14:42.812166 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:14:42.813817 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:14:42.815552 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:14:42.817070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:14:42.818884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:14:42.822364 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:14:42.825498 systemd-journald[1063]: Time spent on flushing to /var/log/journal/fc54c9c90dc24b2e8446f29d3b5f72ea is 13.436ms for 950 entries. Jun 25 14:14:42.825498 systemd-journald[1063]: System Journal (/var/log/journal/fc54c9c90dc24b2e8446f29d3b5f72ea) is 8.0M, max 195.6M, 187.6M free. Jun 25 14:14:42.851531 systemd-journald[1063]: Received client request to flush runtime journal. 
Jun 25 14:14:42.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.828000 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:14:42.829149 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:14:42.852813 udevadm[1111]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 14:14:42.830327 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:14:42.833012 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:14:42.834330 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:14:42.835596 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:14:42.841785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:14:42.852659 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:14:42.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:42.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:42.854038 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:14:42.860553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:14:42.877054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:14:42.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.248497 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:14:43.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.257532 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:14:43.289777 systemd-udevd[1131]: Using default interface naming scheme 'v252'. Jun 25 14:14:43.301576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:14:43.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.320439 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:14:43.324831 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Jun 25 14:14:43.333625 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:14:43.337545 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1139) Jun 25 14:14:43.363339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1136) Jun 25 14:14:43.368534 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:14:43.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.402609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 14:14:43.432738 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:14:43.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.436946 systemd-networkd[1138]: lo: Link UP Jun 25 14:14:43.436958 systemd-networkd[1138]: lo: Gained carrier Jun 25 14:14:43.437379 systemd-networkd[1138]: Enumeration completed Jun 25 14:14:43.437483 systemd-networkd[1138]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:14:43.437486 systemd-networkd[1138]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:14:43.438413 systemd-networkd[1138]: eth0: Link UP Jun 25 14:14:43.438416 systemd-networkd[1138]: eth0: Gained carrier Jun 25 14:14:43.438425 systemd-networkd[1138]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 14:14:43.441534 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:14:43.442734 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:14:43.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.445787 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 14:14:43.455377 systemd-networkd[1138]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 14:14:43.457293 lvm[1166]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:14:43.486282 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:14:43.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.487201 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:14:43.499535 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:14:43.503253 lvm[1169]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:14:43.529322 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:14:43.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.530298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jun 25 14:14:43.531102 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:14:43.531133 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:14:43.531963 systemd[1]: Reached target machines.target - Containers. Jun 25 14:14:43.545547 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:14:43.546549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:14:43.546648 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:14:43.548199 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:14:43.550551 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:14:43.553344 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:14:43.556037 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:14:43.557619 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1172 (bootctl) Jun 25 14:14:43.559313 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:14:43.563984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 25 14:14:43.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.568296 kernel: loop0: detected capacity change from 0 to 113264 Jun 25 14:14:43.584272 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:14:43.625055 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:14:43.625899 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:14:43.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.632330 systemd-fsck[1180]: fsck.fat 4.2 (2021-01-31) Jun 25 14:14:43.632330 systemd-fsck[1180]: /dev/vda1: 242 files, 114659/258078 clusters Jun 25 14:14:43.635108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:14:43.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.638302 kernel: loop1: detected capacity change from 0 to 59648 Jun 25 14:14:43.642494 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 14:14:43.654006 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:14:43.662295 kernel: loop2: detected capacity change from 0 to 193208 Jun 25 14:14:43.663828 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. 
Jun 25 14:14:43.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.687264 kernel: loop3: detected capacity change from 0 to 113264 Jun 25 14:14:43.694274 kernel: loop4: detected capacity change from 0 to 59648 Jun 25 14:14:43.702277 kernel: loop5: detected capacity change from 0 to 193208 Jun 25 14:14:43.709624 (sd-sysext)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 14:14:43.710739 (sd-sysext)[1191]: Merged extensions into '/usr'. Jun 25 14:14:43.712473 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:14:43.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.727638 systemd[1]: Starting ensure-sysext.service... Jun 25 14:14:43.730258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:14:43.741816 systemd[1]: Reloading. Jun 25 14:14:43.744109 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:14:43.744991 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:14:43.745263 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:14:43.745900 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:14:43.798031 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 25 14:14:43.877506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:14:43.925295 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 14:14:43.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.938168 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:14:43.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.942094 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:14:43.944653 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:14:43.947474 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:14:43.950546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:14:43.953666 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:14:43.956130 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:14:43.961864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:14:43.963667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:14:43.967009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 25 14:14:43.969000 audit[1272]: SYSTEM_BOOT pid=1272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.970138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:14:43.971337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:14:43.971513 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:14:43.973263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:14:43.973455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:14:43.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.975129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:14:43.975347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:14:43.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:43.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.977142 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:14:43.977374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:14:43.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.980697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:14:43.980877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:14:43.982921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:14:43.988648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:14:43.991461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:14:43.994021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:14:43.995012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 14:14:43.995217 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:14:43.997122 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:14:43.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.998623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:14:43.998786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:14:43.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.000444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:14:44.000591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:14:44.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:44.002672 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:14:44.002853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:14:44.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.004838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:14:44.004956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:14:44.013695 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:14:44.015472 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:14:44.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.017199 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:14:44.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.021935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 25 14:14:44.023881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:14:44.026637 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:14:44.029544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:14:44.032195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:14:44.033577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:14:44.033753 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:14:44.033900 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:14:44.034782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:14:44.034974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:14:44.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.036976 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jun 25 14:14:44.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.038757 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:14:44.038936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:14:44.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.040602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:14:44.040796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:14:44.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:44.043603 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:14:43.641604 systemd-timesyncd[1269]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 14:14:43.673859 systemd-journald[1063]: Time jumped backwards, rotating. 
Jun 25 14:14:43.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:43.659000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:14:43.659000 audit[1308]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffff23cfc0 a2=420 a3=0 items=0 ppid=1261 pid=1308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:43.659000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:14:43.674154 augenrules[1308]: No rules Jun 25 14:14:43.641664 systemd-timesyncd[1269]: Initial clock synchronization to Tue 2024-06-25 14:14:43.641505 UTC. Jun 25 14:14:43.646499 systemd[1]: Finished ensure-sysext.service. Jun 25 14:14:43.647566 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:14:43.651433 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 25 14:14:43.651605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:14:43.653058 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:14:43.653946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:14:43.661941 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:14:43.665431 systemd-resolved[1265]: Positive Trust Anchors: Jun 25 14:14:43.665440 systemd-resolved[1265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:14:43.665467 systemd-resolved[1265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:14:43.674024 systemd-resolved[1265]: Defaulting to hostname 'linux'. Jun 25 14:14:43.675854 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:14:43.676993 systemd[1]: Reached target network.target - Network. Jun 25 14:14:43.677725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:14:43.678554 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:14:43.679388 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:14:43.680237 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:14:43.681243 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jun 25 14:14:43.682162 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:14:43.683263 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:14:43.684131 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:14:43.684161 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:14:43.684813 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:14:43.686025 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:14:43.688484 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:14:43.690309 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:14:43.691199 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:14:43.692912 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:14:43.693692 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:14:43.694459 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:14:43.695292 systemd[1]: System is tainted: cgroupsv1 Jun 25 14:14:43.695337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:14:43.695356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:14:43.696779 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:14:43.699038 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:14:43.701562 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jun 25 14:14:43.704275 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:14:43.705175 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:14:43.707472 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:14:43.710096 jq[1324]: false Jun 25 14:14:43.710264 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:14:43.712607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:14:43.715381 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:14:43.718748 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:14:43.719790 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:14:43.719916 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 14:14:43.721635 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:14:43.728322 jq[1338]: true Jun 25 14:14:43.723937 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 25 14:14:43.732217 extend-filesystems[1325]: Found loop3 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found loop4 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found loop5 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda1 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda2 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda3 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found usr Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda4 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda6 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda7 Jun 25 14:14:43.732217 extend-filesystems[1325]: Found vda9 Jun 25 14:14:43.732217 extend-filesystems[1325]: Checking size of /dev/vda9 Jun 25 14:14:43.727243 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:14:43.732323 dbus-daemon[1323]: [system] SELinux support is enabled Jun 25 14:14:43.732457 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:14:43.732873 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:14:43.770431 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1143) Jun 25 14:14:43.770554 tar[1344]: linux-arm64/helm Jun 25 14:14:43.736697 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:14:43.770872 jq[1349]: true Jun 25 14:14:43.736955 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:14:43.743290 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:14:43.743349 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 25 14:14:43.748858 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:14:43.748879 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:14:43.757096 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:14:43.757356 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:14:43.779741 extend-filesystems[1325]: Resized partition /dev/vda9 Jun 25 14:14:43.783507 extend-filesystems[1366]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 14:14:43.788182 update_engine[1337]: I0625 14:14:43.787214 1337 main.cc:92] Flatcar Update Engine starting Jun 25 14:14:43.791783 update_engine[1337]: I0625 14:14:43.791504 1337 update_check_scheduler.cc:74] Next update check in 7m15s Jun 25 14:14:43.791773 systemd[1]: Started update-engine.service - Update Engine. Jun 25 14:14:43.795541 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 14:14:43.794232 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:14:43.801202 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 14:14:43.822728 systemd-logind[1335]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 14:14:43.824730 systemd-logind[1335]: New seat seat0. Jun 25 14:14:43.831273 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:14:43.850909 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 14:14:43.865296 bash[1374]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:14:43.866298 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:14:43.867655 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jun 25 14:14:43.867998 extend-filesystems[1366]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 14:14:43.867998 extend-filesystems[1366]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 14:14:43.867998 extend-filesystems[1366]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 14:14:43.872230 extend-filesystems[1325]: Resized filesystem in /dev/vda9 Jun 25 14:14:43.869633 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:14:43.869861 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:14:43.895430 locksmithd[1375]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:14:43.998441 containerd[1350]: time="2024-06-25T14:14:43.998343695Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:14:44.021533 containerd[1350]: time="2024-06-25T14:14:44.021469975Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:14:44.021533 containerd[1350]: time="2024-06-25T14:14:44.021533215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.022939 containerd[1350]: time="2024-06-25T14:14:44.022874095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:14:44.022939 containerd[1350]: time="2024-06-25T14:14:44.022931295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023272 containerd[1350]: time="2024-06-25T14:14:44.023232775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023272 containerd[1350]: time="2024-06-25T14:14:44.023263415Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 14:14:44.023355 containerd[1350]: time="2024-06-25T14:14:44.023340335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023444 containerd[1350]: time="2024-06-25T14:14:44.023403815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023444 containerd[1350]: time="2024-06-25T14:14:44.023437255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023519 containerd[1350]: time="2024-06-25T14:14:44.023505415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023731 containerd[1350]: time="2024-06-25T14:14:44.023704135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023757 containerd[1350]: time="2024-06-25T14:14:44.023733615Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:14:44.023757 containerd[1350]: time="2024-06-25T14:14:44.023746055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023947 containerd[1350]: time="2024-06-25T14:14:44.023926015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:14:44.023973 containerd[1350]: time="2024-06-25T14:14:44.023947775Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:14:44.024022 containerd[1350]: time="2024-06-25T14:14:44.024006935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:14:44.024043 containerd[1350]: time="2024-06-25T14:14:44.024028135Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:14:44.027448 containerd[1350]: time="2024-06-25T14:14:44.027404015Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:14:44.027448 containerd[1350]: time="2024-06-25T14:14:44.027452415Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:14:44.027573 containerd[1350]: time="2024-06-25T14:14:44.027467495Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:14:44.027573 containerd[1350]: time="2024-06-25T14:14:44.027501055Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 14:14:44.027573 containerd[1350]: time="2024-06-25T14:14:44.027515175Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:14:44.027573 containerd[1350]: time="2024-06-25T14:14:44.027526975Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:14:44.027573 containerd[1350]: time="2024-06-25T14:14:44.027539695Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jun 25 14:14:44.027698 containerd[1350]: time="2024-06-25T14:14:44.027679055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:14:44.027776 containerd[1350]: time="2024-06-25T14:14:44.027703775Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:14:44.027776 containerd[1350]: time="2024-06-25T14:14:44.027717295Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:14:44.027776 containerd[1350]: time="2024-06-25T14:14:44.027730815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:14:44.027776 containerd[1350]: time="2024-06-25T14:14:44.027745015Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.027776 containerd[1350]: time="2024-06-25T14:14:44.027760935Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.027776 containerd[1350]: time="2024-06-25T14:14:44.027773695Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.027887 containerd[1350]: time="2024-06-25T14:14:44.027787135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.027887 containerd[1350]: time="2024-06-25T14:14:44.027802055Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.027887 containerd[1350]: time="2024-06-25T14:14:44.027816335Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jun 25 14:14:44.027887 containerd[1350]: time="2024-06-25T14:14:44.027829495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.027887 containerd[1350]: time="2024-06-25T14:14:44.027841335Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:14:44.027989 containerd[1350]: time="2024-06-25T14:14:44.027952815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:14:44.028269 containerd[1350]: time="2024-06-25T14:14:44.028248935Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:14:44.028307 containerd[1350]: time="2024-06-25T14:14:44.028289455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028307 containerd[1350]: time="2024-06-25T14:14:44.028304175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 14:14:44.028355 containerd[1350]: time="2024-06-25T14:14:44.028326815Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:14:44.028458 containerd[1350]: time="2024-06-25T14:14:44.028442855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028486 containerd[1350]: time="2024-06-25T14:14:44.028460735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028486 containerd[1350]: time="2024-06-25T14:14:44.028474455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028523 containerd[1350]: time="2024-06-25T14:14:44.028485855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jun 25 14:14:44.028523 containerd[1350]: time="2024-06-25T14:14:44.028498255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028523 containerd[1350]: time="2024-06-25T14:14:44.028511495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028579 containerd[1350]: time="2024-06-25T14:14:44.028523695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028579 containerd[1350]: time="2024-06-25T14:14:44.028534895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028579 containerd[1350]: time="2024-06-25T14:14:44.028547375Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:14:44.028691 containerd[1350]: time="2024-06-25T14:14:44.028672735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028713 containerd[1350]: time="2024-06-25T14:14:44.028702455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028736 containerd[1350]: time="2024-06-25T14:14:44.028716375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028736 containerd[1350]: time="2024-06-25T14:14:44.028729975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028781 containerd[1350]: time="2024-06-25T14:14:44.028742015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028781 containerd[1350]: time="2024-06-25T14:14:44.028756055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jun 25 14:14:44.028781 containerd[1350]: time="2024-06-25T14:14:44.028767935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.028781 containerd[1350]: time="2024-06-25T14:14:44.028778735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 14:14:44.029080 containerd[1350]: time="2024-06-25T14:14:44.029027415Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 
StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:14:44.029500 containerd[1350]: time="2024-06-25T14:14:44.029090215Z" level=info msg="Connect containerd service" Jun 25 14:14:44.029500 containerd[1350]: time="2024-06-25T14:14:44.029123855Z" level=info msg="using legacy CRI server" Jun 25 14:14:44.029500 containerd[1350]: time="2024-06-25T14:14:44.029131135Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:14:44.029500 containerd[1350]: time="2024-06-25T14:14:44.029268255Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:14:44.029959 containerd[1350]: time="2024-06-25T14:14:44.029934495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:14:44.030747 containerd[1350]: time="2024-06-25T14:14:44.030701975Z" level=info msg="loading plugin 
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:14:44.030747 containerd[1350]: time="2024-06-25T14:14:44.030743935Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:14:44.030804 containerd[1350]: time="2024-06-25T14:14:44.030756215Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:14:44.030804 containerd[1350]: time="2024-06-25T14:14:44.030766815Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:14:44.031322 containerd[1350]: time="2024-06-25T14:14:44.031299815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:14:44.031363 containerd[1350]: time="2024-06-25T14:14:44.031345735Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:14:44.031475 containerd[1350]: time="2024-06-25T14:14:44.031447855Z" level=info msg="Start subscribing containerd event" Jun 25 14:14:44.031512 containerd[1350]: time="2024-06-25T14:14:44.031486295Z" level=info msg="Start recovering state" Jun 25 14:14:44.031565 containerd[1350]: time="2024-06-25T14:14:44.031547575Z" level=info msg="Start event monitor" Jun 25 14:14:44.031565 containerd[1350]: time="2024-06-25T14:14:44.031562175Z" level=info msg="Start snapshots syncer" Jun 25 14:14:44.031614 containerd[1350]: time="2024-06-25T14:14:44.031571415Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:14:44.031614 containerd[1350]: time="2024-06-25T14:14:44.031578735Z" level=info msg="Start streaming server" Jun 25 14:14:44.031715 containerd[1350]: time="2024-06-25T14:14:44.031696575Z" level=info msg="containerd successfully booted in 0.035705s" Jun 25 14:14:44.031803 systemd[1]: Started containerd.service - containerd container runtime. 
Jun 25 14:14:44.163322 tar[1344]: linux-arm64/LICENSE Jun 25 14:14:44.163433 tar[1344]: linux-arm64/README.md Jun 25 14:14:44.180300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:14:44.676003 systemd-networkd[1138]: eth0: Gained IPv6LL Jun 25 14:14:44.678074 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:14:44.679347 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:14:44.686429 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 14:14:44.689029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:14:44.691402 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:14:44.701855 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 14:14:44.702124 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 14:14:44.703389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 14:14:44.712927 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:14:45.252774 sshd_keygen[1358]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:14:45.269164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:14:45.275579 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:14:45.283326 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:14:45.288606 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:14:45.288833 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:14:45.291683 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:14:45.301020 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:14:45.311426 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jun 25 14:14:45.314569 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 14:14:45.315887 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:14:45.317117 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:14:45.319979 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:14:45.328070 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:14:45.328336 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:14:45.329615 systemd[1]: Startup finished in 5.025s (kernel) + 3.596s (userspace) = 8.621s. Jun 25 14:14:45.770512 kubelet[1426]: E0625 14:14:45.770433 1426 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:14:45.773013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:14:45.773171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:14:50.157286 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:14:50.171315 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:56230.service - OpenSSH per-connection server daemon (10.0.0.1:56230). Jun 25 14:14:50.252774 sshd[1449]: Accepted publickey for core from 10.0.0.1 port 56230 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:50.255195 sshd[1449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.269120 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:14:50.276265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jun 25 14:14:50.278971 systemd-logind[1335]: New session 1 of user core. Jun 25 14:14:50.287569 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:14:50.296249 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:14:50.300786 (systemd)[1454]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.363728 systemd[1454]: Queued start job for default target default.target. Jun 25 14:14:50.363974 systemd[1454]: Reached target paths.target - Paths. Jun 25 14:14:50.363989 systemd[1454]: Reached target sockets.target - Sockets. Jun 25 14:14:50.363998 systemd[1454]: Reached target timers.target - Timers. Jun 25 14:14:50.364020 systemd[1454]: Reached target basic.target - Basic System. Jun 25 14:14:50.364145 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:14:50.364874 systemd[1454]: Reached target default.target - Main User Target. Jun 25 14:14:50.364943 systemd[1454]: Startup finished in 57ms. Jun 25 14:14:50.372203 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:14:50.431252 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:56238.service - OpenSSH per-connection server daemon (10.0.0.1:56238). Jun 25 14:14:50.460752 sshd[1463]: Accepted publickey for core from 10.0.0.1 port 56238 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:50.462052 sshd[1463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.466168 systemd-logind[1335]: New session 2 of user core. Jun 25 14:14:50.476190 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 14:14:50.530680 sshd[1463]: pam_unix(sshd:session): session closed for user core Jun 25 14:14:50.546304 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250). 
Jun 25 14:14:50.546792 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:56238.service: Deactivated successfully. Jun 25 14:14:50.547861 systemd-logind[1335]: Session 2 logged out. Waiting for processes to exit. Jun 25 14:14:50.547931 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 14:14:50.548748 systemd-logind[1335]: Removed session 2. Jun 25 14:14:50.576193 sshd[1468]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:50.577804 sshd[1468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.581196 systemd-logind[1335]: New session 3 of user core. Jun 25 14:14:50.599242 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:14:50.649220 sshd[1468]: pam_unix(sshd:session): session closed for user core Jun 25 14:14:50.664311 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260). Jun 25 14:14:50.664885 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:56250.service: Deactivated successfully. Jun 25 14:14:50.665917 systemd-logind[1335]: Session 3 logged out. Waiting for processes to exit. Jun 25 14:14:50.665975 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 14:14:50.666768 systemd-logind[1335]: Removed session 3. Jun 25 14:14:50.696803 sshd[1475]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:50.697986 sshd[1475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.701537 systemd-logind[1335]: New session 4 of user core. Jun 25 14:14:50.712260 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:14:50.766352 sshd[1475]: pam_unix(sshd:session): session closed for user core Jun 25 14:14:50.775291 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:56262.service - OpenSSH per-connection server daemon (10.0.0.1:56262). 
Jun 25 14:14:50.775878 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:56260.service: Deactivated successfully. Jun 25 14:14:50.776919 systemd-logind[1335]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:14:50.776968 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:14:50.779965 systemd-logind[1335]: Removed session 4. Jun 25 14:14:50.806632 sshd[1482]: Accepted publickey for core from 10.0.0.1 port 56262 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:50.807868 sshd[1482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.811532 systemd-logind[1335]: New session 5 of user core. Jun 25 14:14:50.821235 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:14:50.881348 sudo[1488]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:14:50.881609 sudo[1488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:14:50.897159 sudo[1488]: pam_unix(sudo:session): session closed for user root Jun 25 14:14:50.899250 sshd[1482]: pam_unix(sshd:session): session closed for user core Jun 25 14:14:50.912474 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:56272.service - OpenSSH per-connection server daemon (10.0.0.1:56272). Jun 25 14:14:50.913082 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:56262.service: Deactivated successfully. Jun 25 14:14:50.914108 systemd-logind[1335]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:14:50.914169 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:14:50.915048 systemd-logind[1335]: Removed session 5. Jun 25 14:14:50.944812 sshd[1490]: Accepted publickey for core from 10.0.0.1 port 56272 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:50.946742 sshd[1490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:50.950462 systemd-logind[1335]: New session 6 of user core. 
Jun 25 14:14:50.960274 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:14:51.013434 sudo[1497]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:14:51.013684 sudo[1497]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:14:51.016848 sudo[1497]: pam_unix(sudo:session): session closed for user root Jun 25 14:14:51.022033 sudo[1496]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:14:51.022596 sudo[1496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:14:51.040294 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:14:51.040000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:14:51.042064 auditctl[1500]: No rules Jun 25 14:14:51.042316 kernel: kauditd_printk_skb: 64 callbacks suppressed Jun 25 14:14:51.042552 kernel: audit: type=1305 audit(1719324891.040:154): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:14:51.042590 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:14:51.042834 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:14:51.040000 audit[1500]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9383920 a2=420 a3=0 items=0 ppid=1 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.044805 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jun 25 14:14:51.046197 kernel: audit: type=1300 audit(1719324891.040:154): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9383920 a2=420 a3=0 items=0 ppid=1 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.046266 kernel: audit: type=1327 audit(1719324891.040:154): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:14:51.040000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:14:51.046916 kernel: audit: type=1131 audit(1719324891.041:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.066008 augenrules[1518]: No rules Jun 25 14:14:51.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.066784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:14:51.068000 audit[1496]: USER_END pid=1496 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:51.069108 sudo[1496]: pam_unix(sudo:session): session closed for user root Jun 25 14:14:51.070726 sshd[1490]: pam_unix(sshd:session): session closed for user core Jun 25 14:14:51.071973 kernel: audit: type=1130 audit(1719324891.066:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.072035 kernel: audit: type=1106 audit(1719324891.068:157): pid=1496 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.072054 kernel: audit: type=1104 audit(1719324891.068:158): pid=1496 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.068000 audit[1496]: CRED_DISP pid=1496 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:51.073931 kernel: audit: type=1106 audit(1719324891.071:159): pid=1490 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.071000 audit[1490]: USER_END pid=1490 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.071000 audit[1490]: CRED_DISP pid=1490 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.078189 kernel: audit: type=1104 audit(1719324891.071:160): pid=1490 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.080344 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:56284.service - OpenSSH per-connection server daemon (10.0.0.1:56284). Jun 25 14:14:51.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.080856 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:56272.service: Deactivated successfully. Jun 25 14:14:51.081935 systemd-logind[1335]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:14:51.081982 systemd[1]: session-6.scope: Deactivated successfully. 
Jun 25 14:14:51.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.23:22-10.0.0.1:56272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.082972 kernel: audit: type=1130 audit(1719324891.079:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.084289 systemd-logind[1335]: Removed session 6. Jun 25 14:14:51.111000 audit[1523]: USER_ACCT pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.112936 sshd[1523]: Accepted publickey for core from 10.0.0.1 port 56284 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:14:51.112000 audit[1523]: CRED_ACQ pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.113000 audit[1523]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffadfd530 a2=3 a3=1 items=0 ppid=1 pid=1523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.113000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:14:51.114195 sshd[1523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:14:51.117753 systemd-logind[1335]: New session 7 of user core. Jun 25 14:14:51.125202 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 25 14:14:51.127000 audit[1523]: USER_START pid=1523 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.129000 audit[1528]: CRED_ACQ pid=1528 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:14:51.176000 audit[1529]: USER_ACCT pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.177862 sudo[1529]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:14:51.177000 audit[1529]: CRED_REFR pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.178128 sudo[1529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:14:51.178000 audit[1529]: USER_START pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:14:51.287284 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:14:51.528972 dockerd[1539]: time="2024-06-25T14:14:51.528911175Z" level=info msg="Starting up" Jun 25 14:14:51.708492 dockerd[1539]: time="2024-06-25T14:14:51.708375975Z" level=info msg="Loading containers: start." 
Jun 25 14:14:51.752000 audit[1574]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.752000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff34c6420 a2=0 a3=1 items=0 ppid=1539 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.752000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:14:51.754000 audit[1576]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.754000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffea0f7d0 a2=0 a3=1 items=0 ppid=1539 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.754000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:14:51.756000 audit[1578]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.756000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe41b7370 a2=0 a3=1 items=0 ppid=1539 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.756000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:14:51.758000 
audit[1580]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.758000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffce8977d0 a2=0 a3=1 items=0 ppid=1539 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.758000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:14:51.762000 audit[1582]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.762000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe6f6c490 a2=0 a3=1 items=0 ppid=1539 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.762000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:14:51.764000 audit[1584]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.764000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffede4c140 a2=0 a3=1 items=0 ppid=1539 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.764000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:14:51.772000 audit[1586]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.772000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd29fe910 a2=0 a3=1 items=0 ppid=1539 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.772000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:14:51.774000 audit[1588]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.774000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffb73bee0 a2=0 a3=1 items=0 ppid=1539 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.774000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:14:51.776000 audit[1590]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.776000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff69d8330 a2=0 a3=1 items=0 ppid=1539 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.776000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:14:51.785000 audit[1594]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.785000 audit[1594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc82b3040 a2=0 a3=1 items=0 ppid=1539 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.785000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:14:51.786000 audit[1595]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.786000 audit[1595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc70bbb00 a2=0 a3=1 items=0 ppid=1539 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.786000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:14:51.793924 kernel: Initializing XFRM netlink socket Jun 25 14:14:51.818000 audit[1603]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.818000 audit[1603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff50adcc0 a2=0 a3=1 items=0 ppid=1539 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.818000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:14:51.845000 audit[1606]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.845000 audit[1606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcb95ccf0 a2=0 a3=1 items=0 ppid=1539 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.845000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:14:51.849000 audit[1610]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.849000 audit[1610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdf8c80a0 a2=0 a3=1 items=0 ppid=1539 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.849000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 14:14:51.851000 audit[1612]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.851000 audit[1612]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdd85a570 a2=0 a3=1 items=0 ppid=1539 pid=1612 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:14:51.853000 audit[1614]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.853000 audit[1614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff487e8c0 a2=0 a3=1 items=0 ppid=1539 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.853000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:14:51.856000 audit[1616]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.856000 audit[1616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffeb70a3f0 a2=0 a3=1 items=0 ppid=1539 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.856000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:14:51.858000 audit[1618]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1618 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.858000 audit[1618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff33cb050 a2=0 a3=1 items=0 ppid=1539 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.858000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:14:51.863000 audit[1621]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.863000 audit[1621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe94eb900 a2=0 a3=1 items=0 ppid=1539 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.863000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:14:51.866000 audit[1623]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.866000 audit[1623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffddad5320 a2=0 a3=1 items=0 ppid=1539 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.866000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:14:51.868000 audit[1625]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.868000 audit[1625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc322fd90 a2=0 a3=1 items=0 ppid=1539 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.868000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:14:51.870000 audit[1627]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.870000 audit[1627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe8097c80 a2=0 a3=1 items=0 ppid=1539 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.870000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:14:51.871484 systemd-networkd[1138]: docker0: Link UP Jun 25 14:14:51.878000 audit[1631]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.878000 audit[1631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdfde3a30 
a2=0 a3=1 items=0 ppid=1539 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.878000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:14:51.879000 audit[1632]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:14:51.879000 audit[1632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffda94f6c0 a2=0 a3=1 items=0 ppid=1539 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:14:51.879000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:14:51.881490 dockerd[1539]: time="2024-06-25T14:14:51.881438135Z" level=info msg="Loading containers: done." Jun 25 14:14:51.943033 dockerd[1539]: time="2024-06-25T14:14:51.942967615Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:14:51.943215 dockerd[1539]: time="2024-06-25T14:14:51.943182815Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:14:51.943333 dockerd[1539]: time="2024-06-25T14:14:51.943309455Z" level=info msg="Daemon has completed initialization" Jun 25 14:14:51.968435 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jun 25 14:14:51.969047 dockerd[1539]: time="2024-06-25T14:14:51.968191135Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:14:51.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:52.524991 containerd[1350]: time="2024-06-25T14:14:52.524930455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 14:14:53.117678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4124323475.mount: Deactivated successfully. Jun 25 14:14:54.181081 containerd[1350]: time="2024-06-25T14:14:54.181022095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:54.181592 containerd[1350]: time="2024-06-25T14:14:54.181558895Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540" Jun 25 14:14:54.182353 containerd[1350]: time="2024-06-25T14:14:54.182322375Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:54.185228 containerd[1350]: time="2024-06-25T14:14:54.185192775Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:54.187237 containerd[1350]: time="2024-06-25T14:14:54.187205575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:54.188587 containerd[1350]: time="2024-06-25T14:14:54.188553215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id 
\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 1.66355932s" Jun 25 14:14:54.188644 containerd[1350]: time="2024-06-25T14:14:54.188594815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 14:14:54.209742 containerd[1350]: time="2024-06-25T14:14:54.209695615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 14:14:55.612937 containerd[1350]: time="2024-06-25T14:14:55.612867095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:55.613486 containerd[1350]: time="2024-06-25T14:14:55.613452215Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120" Jun 25 14:14:55.614262 containerd[1350]: time="2024-06-25T14:14:55.614229095Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:55.618792 containerd[1350]: time="2024-06-25T14:14:55.618749335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:55.620829 containerd[1350]: time="2024-06-25T14:14:55.620798335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:55.622231 containerd[1350]: time="2024-06-25T14:14:55.622198615Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.41245736s" Jun 25 14:14:55.622308 containerd[1350]: time="2024-06-25T14:14:55.622232615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 14:14:55.641818 containerd[1350]: time="2024-06-25T14:14:55.641768375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 14:14:55.989399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:14:55.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:55.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:55.989571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:14:55.999163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:14:56.093736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:14:56.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:14:56.096549 kernel: kauditd_printk_skb: 86 callbacks suppressed Jun 25 14:14:56.096644 kernel: audit: type=1130 audit(1719324896.093:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:14:56.151080 kubelet[1760]: E0625 14:14:56.151018 1760 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:14:56.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:14:56.154090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:14:56.154246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:14:56.156915 kernel: audit: type=1131 audit(1719324896.153:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 14:14:56.701159 containerd[1350]: time="2024-06-25T14:14:56.701101535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:56.701572 containerd[1350]: time="2024-06-25T14:14:56.701524335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440" Jun 25 14:14:56.702459 containerd[1350]: time="2024-06-25T14:14:56.702423295Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:56.706069 containerd[1350]: time="2024-06-25T14:14:56.706031335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:56.708398 containerd[1350]: time="2024-06-25T14:14:56.708354575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:56.710146 containerd[1350]: time="2024-06-25T14:14:56.710107815Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.06829044s" Jun 25 14:14:56.710188 containerd[1350]: time="2024-06-25T14:14:56.710148335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 14:14:56.729629 containerd[1350]: time="2024-06-25T14:14:56.729569855Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 14:14:57.668745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337686532.mount: Deactivated successfully. Jun 25 14:14:58.041621 containerd[1350]: time="2024-06-25T14:14:58.041563415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.042388 containerd[1350]: time="2024-06-25T14:14:58.042335815Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463" Jun 25 14:14:58.043408 containerd[1350]: time="2024-06-25T14:14:58.043361495Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.046504 containerd[1350]: time="2024-06-25T14:14:58.046464815Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.052026 containerd[1350]: time="2024-06-25T14:14:58.051984895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.052833 containerd[1350]: time="2024-06-25T14:14:58.052793455Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.32316324s" Jun 25 14:14:58.052867 containerd[1350]: time="2024-06-25T14:14:58.052831535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference 
\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 14:14:58.071502 containerd[1350]: time="2024-06-25T14:14:58.071448695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:14:58.538423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601832514.mount: Deactivated successfully. Jun 25 14:14:58.544152 containerd[1350]: time="2024-06-25T14:14:58.544096095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.545018 containerd[1350]: time="2024-06-25T14:14:58.544971255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 14:14:58.546142 containerd[1350]: time="2024-06-25T14:14:58.546096255Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.548606 containerd[1350]: time="2024-06-25T14:14:58.548132215Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.550548 containerd[1350]: time="2024-06-25T14:14:58.550502535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:14:58.551485 containerd[1350]: time="2024-06-25T14:14:58.551429495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 479.92956ms" Jun 25 14:14:58.551530 containerd[1350]: time="2024-06-25T14:14:58.551483615Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:14:58.573007 containerd[1350]: time="2024-06-25T14:14:58.572966575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 14:14:59.140048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94448469.mount: Deactivated successfully. Jun 25 14:15:02.119524 containerd[1350]: time="2024-06-25T14:15:02.119478015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:02.120275 containerd[1350]: time="2024-06-25T14:15:02.120244295Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jun 25 14:15:02.122047 containerd[1350]: time="2024-06-25T14:15:02.121426375Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:02.124476 containerd[1350]: time="2024-06-25T14:15:02.123836495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:02.126204 containerd[1350]: time="2024-06-25T14:15:02.126169415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:02.127697 containerd[1350]: time="2024-06-25T14:15:02.127651575Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.55447196s" Jun 25 14:15:02.127822 containerd[1350]: 
time="2024-06-25T14:15:02.127801535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 14:15:02.151367 containerd[1350]: time="2024-06-25T14:15:02.151306335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 14:15:02.774243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930658674.mount: Deactivated successfully. Jun 25 14:15:03.152118 containerd[1350]: time="2024-06-25T14:15:03.151973935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:03.152942 containerd[1350]: time="2024-06-25T14:15:03.152888935Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Jun 25 14:15:03.154270 containerd[1350]: time="2024-06-25T14:15:03.154238775Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:03.156065 containerd[1350]: time="2024-06-25T14:15:03.156032735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:03.157601 containerd[1350]: time="2024-06-25T14:15:03.157552975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:03.158644 containerd[1350]: time="2024-06-25T14:15:03.158602375Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.00724768s" Jun 25 14:15:03.158688 containerd[1350]: time="2024-06-25T14:15:03.158644735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 14:15:06.239428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:15:06.239603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:06.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:06.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:06.245240 kernel: audit: type=1130 audit(1719324906.238:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:06.245299 kernel: audit: type=1131 audit(1719324906.238:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:06.249231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:15:06.343292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:06.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:06.348177 kernel: audit: type=1130 audit(1719324906.341:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:06.389442 kubelet[1942]: E0625 14:15:06.389383 1942 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:15:06.391804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:15:06.391981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:15:06.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:15:06.394909 kernel: audit: type=1131 audit(1719324906.391:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:15:07.717597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:07.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:07.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:07.721505 kernel: audit: type=1130 audit(1719324907.717:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:07.721570 kernel: audit: type=1131 audit(1719324907.717:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:07.727326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:15:07.745724 systemd[1]: Reloading. Jun 25 14:15:07.961829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:15:08.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:08.035297 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:15:08.036801 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:15:08.036918 kernel: audit: type=1130 audit(1719324908.033:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:08.037229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:08.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:08.039931 kernel: audit: type=1131 audit(1719324908.036:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:08.049259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:15:08.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:08.139046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:08.141975 kernel: audit: type=1130 audit(1719324908.138:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:08.186042 kubelet[2034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:15:08.186042 kubelet[2034]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:15:08.186042 kubelet[2034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 14:15:08.186881 kubelet[2034]: I0625 14:15:08.186159 2034 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:15:09.067089 kubelet[2034]: I0625 14:15:09.067048 2034 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:15:09.067089 kubelet[2034]: I0625 14:15:09.067078 2034 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:15:09.067323 kubelet[2034]: I0625 14:15:09.067308 2034 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:15:09.088056 kubelet[2034]: I0625 14:15:09.088029 2034 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:15:09.092591 kubelet[2034]: E0625 14:15:09.092548 2034 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.106682 kubelet[2034]: W0625 14:15:09.106648 2034 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:15:09.107517 kubelet[2034]: I0625 14:15:09.107492 2034 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:15:09.107883 kubelet[2034]: I0625 14:15:09.107858 2034 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:15:09.108082 kubelet[2034]: I0625 14:15:09.108055 2034 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:15:09.108189 kubelet[2034]: I0625 14:15:09.108087 2034 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:15:09.108189 kubelet[2034]: I0625 14:15:09.108096 2034 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:15:09.108334 kubelet[2034]: 
I0625 14:15:09.108302 2034 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:15:09.109518 kubelet[2034]: I0625 14:15:09.109493 2034 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:15:09.109518 kubelet[2034]: I0625 14:15:09.109521 2034 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:15:09.109633 kubelet[2034]: I0625 14:15:09.109617 2034 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:15:09.109666 kubelet[2034]: I0625 14:15:09.109641 2034 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:15:09.114205 kubelet[2034]: W0625 14:15:09.114140 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.114205 kubelet[2034]: E0625 14:15:09.114207 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.114416 kubelet[2034]: W0625 14:15:09.114372 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.114508 kubelet[2034]: E0625 14:15:09.114496 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.114884 kubelet[2034]: I0625 14:15:09.114863 2034 kuberuntime_manager.go:257] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:15:09.117769 kubelet[2034]: W0625 14:15:09.117742 2034 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 14:15:09.119209 kubelet[2034]: I0625 14:15:09.119183 2034 server.go:1232] "Started kubelet" Jun 25 14:15:09.119443 kubelet[2034]: I0625 14:15:09.119430 2034 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:15:09.119718 kubelet[2034]: I0625 14:15:09.119697 2034 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:15:09.120070 kubelet[2034]: I0625 14:15:09.120042 2034 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:15:09.120240 kubelet[2034]: I0625 14:15:09.120220 2034 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:15:09.121161 kubelet[2034]: E0625 14:15:09.121143 2034 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:15:09.121161 kubelet[2034]: E0625 14:15:09.121168 2034 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:15:09.123386 kubelet[2034]: I0625 14:15:09.123361 2034 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:15:09.122000 audit[2046]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.122000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeff0bed0 a2=0 a3=1 items=0 ppid=2034 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:15:09.127914 kernel: audit: type=1325 audit(1719324909.122:209): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.127949 kubelet[2034]: E0625 14:15:09.127669 2034 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:15:09.127949 kubelet[2034]: I0625 14:15:09.127701 2034 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:15:09.127949 kubelet[2034]: I0625 14:15:09.127802 2034 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:15:09.127949 kubelet[2034]: I0625 14:15:09.127860 2034 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:15:09.128225 kubelet[2034]: W0625 14:15:09.128181 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.128264 kubelet[2034]: E0625 
14:15:09.128230 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.128478 kubelet[2034]: E0625 14:15:09.128412 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Jun 25 14:15:09.129054 kubelet[2034]: E0625 14:15:09.128962 2034 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc44e53d8735e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 15, 9, 119157735, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 15, 9, 119157735, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.23:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.23:6443: connect: connection 
refused'(may retry after sleeping) Jun 25 14:15:09.128000 audit[2047]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.128000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcaff560 a2=0 a3=1 items=0 ppid=2034 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:15:09.128000 audit[2049]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.128000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff69d38a0 a2=0 a3=1 items=0 ppid=2034 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:15:09.133000 audit[2051]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.133000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffa1c67d0 a2=0 a3=1 items=0 ppid=2034 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.133000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:15:09.141000 audit[2056]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.141000 audit[2056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff896acf0 a2=0 a3=1 items=0 ppid=2034 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:15:09.143004 kubelet[2034]: I0625 14:15:09.142968 2034 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:15:09.142000 audit[2057]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:09.142000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc498e5d0 a2=0 a3=1 items=0 ppid=2034 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.142000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:15:09.143974 kubelet[2034]: I0625 14:15:09.143964 2034 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:15:09.144004 kubelet[2034]: I0625 14:15:09.143981 2034 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:15:09.144004 kubelet[2034]: I0625 14:15:09.143996 2034 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:15:09.144064 kubelet[2034]: E0625 14:15:09.144047 2034 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:15:09.143000 audit[2058]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.143000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff68d9500 a2=0 a3=1 items=0 ppid=2034 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:15:09.144000 audit[2059]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:09.144000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3fad360 a2=0 a3=1 items=0 ppid=2034 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:15:09.145000 audit[2060]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jun 25 14:15:09.145000 audit[2060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6de42a0 a2=0 a3=1 items=0 ppid=2034 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:15:09.147733 kubelet[2034]: W0625 14:15:09.147675 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.147733 kubelet[2034]: E0625 14:15:09.147730 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.147000 audit[2062]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:09.147000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe86d63d0 a2=0 a3=1 items=0 ppid=2034 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:15:09.147000 audit[2063]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jun 25 14:15:09.147000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc0288550 a2=0 a3=1 items=0 ppid=2034 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:15:09.148000 audit[2064]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:09.148000 audit[2064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdf58ccf0 a2=0 a3=1 items=0 ppid=2034 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:09.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:15:09.164160 kubelet[2034]: I0625 14:15:09.164118 2034 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:15:09.164160 kubelet[2034]: I0625 14:15:09.164140 2034 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:15:09.164160 kubelet[2034]: I0625 14:15:09.164158 2034 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:15:09.165812 kubelet[2034]: I0625 14:15:09.165786 2034 policy_none.go:49] "None policy: Start" Jun 25 14:15:09.166684 kubelet[2034]: I0625 14:15:09.166656 2034 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:15:09.166684 kubelet[2034]: I0625 14:15:09.166686 2034 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:15:09.174761 kubelet[2034]: I0625 14:15:09.174079 2034 manager.go:471] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:15:09.175157 kubelet[2034]: I0625 14:15:09.175133 2034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:15:09.175455 kubelet[2034]: E0625 14:15:09.175416 2034 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 14:15:09.229681 kubelet[2034]: I0625 14:15:09.229635 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:15:09.230147 kubelet[2034]: E0625 14:15:09.230111 2034 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jun 25 14:15:09.244271 kubelet[2034]: I0625 14:15:09.244239 2034 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 14:15:09.245435 kubelet[2034]: I0625 14:15:09.245405 2034 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 14:15:09.246696 kubelet[2034]: I0625 14:15:09.246660 2034 topology_manager.go:215] "Topology Admit Handler" podUID="13e4b562694da8bec2716520029b7966" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 14:15:09.329030 kubelet[2034]: E0625 14:15:09.328925 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Jun 25 14:15:09.330064 kubelet[2034]: I0625 14:15:09.330042 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:09.330340 kubelet[2034]: I0625 14:15:09.330253 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13e4b562694da8bec2716520029b7966-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13e4b562694da8bec2716520029b7966\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:15:09.330400 kubelet[2034]: I0625 14:15:09.330379 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:09.330570 kubelet[2034]: I0625 14:15:09.330435 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:09.330570 kubelet[2034]: I0625 14:15:09.330492 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:09.330570 kubelet[2034]: I0625 14:15:09.330540 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 14:15:09.331741 kubelet[2034]: I0625 14:15:09.330686 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13e4b562694da8bec2716520029b7966-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13e4b562694da8bec2716520029b7966\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:15:09.331741 kubelet[2034]: I0625 14:15:09.330739 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13e4b562694da8bec2716520029b7966-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13e4b562694da8bec2716520029b7966\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:15:09.331741 kubelet[2034]: I0625 14:15:09.330814 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:09.432031 kubelet[2034]: I0625 14:15:09.431999 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:15:09.432586 kubelet[2034]: E0625 14:15:09.432565 2034 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jun 25 14:15:09.551129 kubelet[2034]: E0625 14:15:09.551086 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:09.551250 kubelet[2034]: E0625 14:15:09.551090 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:09.551355 kubelet[2034]: E0625 14:15:09.551196 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:09.557076 containerd[1350]: time="2024-06-25T14:15:09.556996535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 14:15:09.557379 containerd[1350]: time="2024-06-25T14:15:09.557133695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13e4b562694da8bec2716520029b7966,Namespace:kube-system,Attempt:0,}" Jun 25 14:15:09.557691 containerd[1350]: time="2024-06-25T14:15:09.557534975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 14:15:09.730112 kubelet[2034]: E0625 14:15:09.730022 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Jun 25 14:15:09.834566 kubelet[2034]: I0625 14:15:09.834531 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:15:09.834913 kubelet[2034]: E0625 14:15:09.834872 2034 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jun 25 14:15:09.949052 kubelet[2034]: W0625 
14:15:09.948987 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.949052 kubelet[2034]: E0625 14:15:09.949050 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.965859 kubelet[2034]: W0625 14:15:09.965790 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:09.965859 kubelet[2034]: E0625 14:15:09.965859 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:10.008553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232458706.mount: Deactivated successfully. 
Jun 25 14:15:10.013546 containerd[1350]: time="2024-06-25T14:15:10.013493215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.014019 containerd[1350]: time="2024-06-25T14:15:10.013985375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 14:15:10.014777 containerd[1350]: time="2024-06-25T14:15:10.014733055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.015765 containerd[1350]: time="2024-06-25T14:15:10.015728255Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.016581 containerd[1350]: time="2024-06-25T14:15:10.016493455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:15:10.016675 containerd[1350]: time="2024-06-25T14:15:10.016646455Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.017308 containerd[1350]: time="2024-06-25T14:15:10.017269375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:15:10.018493 containerd[1350]: time="2024-06-25T14:15:10.018461735Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.020984 containerd[1350]: 
time="2024-06-25T14:15:10.020954175Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.023630 containerd[1350]: time="2024-06-25T14:15:10.023600215Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.025777 containerd[1350]: time="2024-06-25T14:15:10.025739095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 468.64148ms" Jun 25 14:15:10.026705 containerd[1350]: time="2024-06-25T14:15:10.026670375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.0624ms" Jun 25 14:15:10.027422 containerd[1350]: time="2024-06-25T14:15:10.027396935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.028879 containerd[1350]: time="2024-06-25T14:15:10.028841055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.61624ms" Jun 25 14:15:10.030238 containerd[1350]: time="2024-06-25T14:15:10.030202455Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.031120 containerd[1350]: time="2024-06-25T14:15:10.031091855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.031989 containerd[1350]: time="2024-06-25T14:15:10.031960535Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.032829 containerd[1350]: time="2024-06-25T14:15:10.032803095Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:15:10.173102 containerd[1350]: time="2024-06-25T14:15:10.172966135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:10.173102 containerd[1350]: time="2024-06-25T14:15:10.173057615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:10.173102 containerd[1350]: time="2024-06-25T14:15:10.173097095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:10.173291 containerd[1350]: time="2024-06-25T14:15:10.173114215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:10.175642 containerd[1350]: time="2024-06-25T14:15:10.175355255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:10.175642 containerd[1350]: time="2024-06-25T14:15:10.175484815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:10.175642 containerd[1350]: time="2024-06-25T14:15:10.175502055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:10.175642 containerd[1350]: time="2024-06-25T14:15:10.175512295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:10.178064 containerd[1350]: time="2024-06-25T14:15:10.177464055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:10.178064 containerd[1350]: time="2024-06-25T14:15:10.177501295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:10.178064 containerd[1350]: time="2024-06-25T14:15:10.177516495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:10.178064 containerd[1350]: time="2024-06-25T14:15:10.177526935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:10.200849 kubelet[2034]: W0625 14:15:10.200786 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:10.200849 kubelet[2034]: E0625 14:15:10.200827 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:10.220202 containerd[1350]: time="2024-06-25T14:15:10.220148615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13e4b562694da8bec2716520029b7966,Namespace:kube-system,Attempt:0,} returns sandbox id \"a86ffe8c72d0e88317a50097aa98ab80d01eaa261a36f25ec720cf17cad341a1\"" Jun 25 14:15:10.221555 containerd[1350]: time="2024-06-25T14:15:10.221515895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"990e26c451820ceb344e54928b1cdb839c8b4aa2f950476748c2d35c1f985175\"" Jun 25 14:15:10.232285 containerd[1350]: time="2024-06-25T14:15:10.229182655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"585df5ded95c6dd6ee6441010457409bd9ecb7f25db0f9916c1617dd73dbb1a8\"" Jun 25 14:15:10.237088 kubelet[2034]: E0625 14:15:10.235142 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:10.237427 kubelet[2034]: E0625 14:15:10.237234 2034 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:10.241869 kubelet[2034]: E0625 14:15:10.239065 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:10.245633 containerd[1350]: time="2024-06-25T14:15:10.245594215Z" level=info msg="CreateContainer within sandbox \"585df5ded95c6dd6ee6441010457409bd9ecb7f25db0f9916c1617dd73dbb1a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:15:10.245748 containerd[1350]: time="2024-06-25T14:15:10.245611495Z" level=info msg="CreateContainer within sandbox \"a86ffe8c72d0e88317a50097aa98ab80d01eaa261a36f25ec720cf17cad341a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:15:10.245949 containerd[1350]: time="2024-06-25T14:15:10.245619615Z" level=info msg="CreateContainer within sandbox \"990e26c451820ceb344e54928b1cdb839c8b4aa2f950476748c2d35c1f985175\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:15:10.333331 containerd[1350]: time="2024-06-25T14:15:10.333207335Z" level=info msg="CreateContainer within sandbox \"990e26c451820ceb344e54928b1cdb839c8b4aa2f950476748c2d35c1f985175\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f912f368e47678b3b55e6e5db05830bc0be3bc742e4dfc5fadc4fe280aa01ba8\"" Jun 25 14:15:10.334625 containerd[1350]: time="2024-06-25T14:15:10.334592135Z" level=info msg="StartContainer for \"f912f368e47678b3b55e6e5db05830bc0be3bc742e4dfc5fadc4fe280aa01ba8\"" Jun 25 14:15:10.334835 containerd[1350]: time="2024-06-25T14:15:10.334662335Z" level=info msg="CreateContainer within sandbox \"585df5ded95c6dd6ee6441010457409bd9ecb7f25db0f9916c1617dd73dbb1a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"d491bb8711e38b8bac04a2177a37634dd41867b0a86fcc2d47b30551d8f8a48e\"" Jun 25 14:15:10.335133 containerd[1350]: time="2024-06-25T14:15:10.335102975Z" level=info msg="StartContainer for \"d491bb8711e38b8bac04a2177a37634dd41867b0a86fcc2d47b30551d8f8a48e\"" Jun 25 14:15:10.337174 containerd[1350]: time="2024-06-25T14:15:10.337117455Z" level=info msg="CreateContainer within sandbox \"a86ffe8c72d0e88317a50097aa98ab80d01eaa261a36f25ec720cf17cad341a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f43b2ca183ce71f1a925ddfa8adb97a9824f8bb0b993ab698f5bbcf5893b71e\"" Jun 25 14:15:10.337569 containerd[1350]: time="2024-06-25T14:15:10.337529295Z" level=info msg="StartContainer for \"1f43b2ca183ce71f1a925ddfa8adb97a9824f8bb0b993ab698f5bbcf5893b71e\"" Jun 25 14:15:10.414338 containerd[1350]: time="2024-06-25T14:15:10.413543615Z" level=info msg="StartContainer for \"1f43b2ca183ce71f1a925ddfa8adb97a9824f8bb0b993ab698f5bbcf5893b71e\" returns successfully" Jun 25 14:15:10.414338 containerd[1350]: time="2024-06-25T14:15:10.413675775Z" level=info msg="StartContainer for \"f912f368e47678b3b55e6e5db05830bc0be3bc742e4dfc5fadc4fe280aa01ba8\" returns successfully" Jun 25 14:15:10.426685 containerd[1350]: time="2024-06-25T14:15:10.426634175Z" level=info msg="StartContainer for \"d491bb8711e38b8bac04a2177a37634dd41867b0a86fcc2d47b30551d8f8a48e\" returns successfully" Jun 25 14:15:10.530823 kubelet[2034]: E0625 14:15:10.530767 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s" Jun 25 14:15:10.536446 kubelet[2034]: W0625 14:15:10.536330 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.23:6443: connect: connection refused Jun 25 14:15:10.536446 kubelet[2034]: E0625 14:15:10.536403 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Jun 25 14:15:10.637078 kubelet[2034]: I0625 14:15:10.636295 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:15:11.157447 kubelet[2034]: E0625 14:15:11.157420 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:11.159247 kubelet[2034]: E0625 14:15:11.159228 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:11.160815 kubelet[2034]: E0625 14:15:11.160796 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:12.162284 kubelet[2034]: E0625 14:15:12.162255 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:12.602616 kubelet[2034]: E0625 14:15:12.602585 2034 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 14:15:12.682543 kubelet[2034]: I0625 14:15:12.682510 2034 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 14:15:13.111733 kubelet[2034]: I0625 14:15:13.111701 2034 apiserver.go:52] "Watching apiserver" Jun 25 14:15:13.127967 kubelet[2034]: I0625 14:15:13.127924 2034 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:15:15.282186 systemd[1]: Reloading. Jun 25 14:15:15.452871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:15:15.538241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:15:15.552349 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:15:15.552674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:15.554937 kernel: kauditd_printk_skb: 35 callbacks suppressed Jun 25 14:15:15.554977 kernel: audit: type=1131 audit(1719324915.551:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:15.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:15.566868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:15:15.712201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:15:15.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:15.717346 kernel: audit: type=1130 audit(1719324915.711:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:15.778399 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:15:15.778399 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:15:15.778399 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:15:15.778399 kubelet[2386]: I0625 14:15:15.777653 2386 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:15:15.785057 kubelet[2386]: I0625 14:15:15.782906 2386 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:15:15.785057 kubelet[2386]: I0625 14:15:15.782943 2386 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:15:15.785057 kubelet[2386]: I0625 14:15:15.783160 2386 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:15:15.785057 kubelet[2386]: I0625 14:15:15.784773 2386 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:15:15.785744 kubelet[2386]: I0625 14:15:15.785724 2386 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:15:15.794261 kubelet[2386]: W0625 14:15:15.793741 2386 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:15:15.794570 kubelet[2386]: I0625 14:15:15.794547 2386 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:15:15.794954 kubelet[2386]: I0625 14:15:15.794933 2386 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:15:15.795105 kubelet[2386]: I0625 14:15:15.795082 2386 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:15:15.795191 kubelet[2386]: I0625 14:15:15.795113 2386 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:15:15.795191 kubelet[2386]: I0625 14:15:15.795121 2386 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:15:15.795191 kubelet[2386]: 
I0625 14:15:15.795159 2386 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:15:15.795270 kubelet[2386]: I0625 14:15:15.795237 2386 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:15:15.795270 kubelet[2386]: I0625 14:15:15.795249 2386 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:15:15.795322 kubelet[2386]: I0625 14:15:15.795272 2386 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:15:15.795322 kubelet[2386]: I0625 14:15:15.795282 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:15:15.795731 kubelet[2386]: I0625 14:15:15.795696 2386 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:15:15.798181 kubelet[2386]: I0625 14:15:15.798164 2386 server.go:1232] "Started kubelet" Jun 25 14:15:15.799433 kubelet[2386]: I0625 14:15:15.799348 2386 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:15:15.799780 kubelet[2386]: I0625 14:15:15.799767 2386 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:15:15.800423 kubelet[2386]: I0625 14:15:15.800241 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:15:15.800423 kubelet[2386]: I0625 14:15:15.799801 2386 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:15:15.800423 kubelet[2386]: E0625 14:15:15.799351 2386 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:15:15.800423 kubelet[2386]: E0625 14:15:15.800483 2386 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:15:15.800423 kubelet[2386]: I0625 14:15:15.800848 2386 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:15:15.800423 kubelet[2386]: I0625 14:15:15.802849 2386 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:15:15.800423 kubelet[2386]: I0625 14:15:15.802946 2386 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:15:15.800423 kubelet[2386]: I0625 14:15:15.803080 2386 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:15:15.838318 kubelet[2386]: I0625 14:15:15.838270 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:15:15.839486 kubelet[2386]: I0625 14:15:15.839461 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 14:15:15.839486 kubelet[2386]: I0625 14:15:15.839489 2386 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:15:15.839588 kubelet[2386]: I0625 14:15:15.839510 2386 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:15:15.839588 kubelet[2386]: E0625 14:15:15.839560 2386 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:15:15.894721 kubelet[2386]: I0625 14:15:15.894697 2386 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:15:15.894721 kubelet[2386]: I0625 14:15:15.894718 2386 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:15:15.894870 kubelet[2386]: I0625 14:15:15.894736 2386 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:15:15.894914 kubelet[2386]: I0625 14:15:15.894888 2386 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:15:15.894950 kubelet[2386]: I0625 14:15:15.894922 2386 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:15:15.894950 
kubelet[2386]: I0625 14:15:15.894929 2386 policy_none.go:49] "None policy: Start" Jun 25 14:15:15.895491 kubelet[2386]: I0625 14:15:15.895474 2386 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:15:15.895542 kubelet[2386]: I0625 14:15:15.895501 2386 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:15:15.895677 kubelet[2386]: I0625 14:15:15.895664 2386 state_mem.go:75] "Updated machine memory state" Jun 25 14:15:15.896713 kubelet[2386]: I0625 14:15:15.896695 2386 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:15:15.896930 kubelet[2386]: I0625 14:15:15.896914 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:15:15.910594 kubelet[2386]: I0625 14:15:15.910560 2386 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:15:15.918039 kubelet[2386]: I0625 14:15:15.917703 2386 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 14:15:15.918039 kubelet[2386]: I0625 14:15:15.917793 2386 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 14:15:15.940339 kubelet[2386]: I0625 14:15:15.940313 2386 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 14:15:15.940470 kubelet[2386]: I0625 14:15:15.940424 2386 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 14:15:15.940470 kubelet[2386]: I0625 14:15:15.940455 2386 topology_manager.go:215] "Topology Admit Handler" podUID="13e4b562694da8bec2716520029b7966" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 14:15:16.104628 kubelet[2386]: I0625 14:15:16.104533 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13e4b562694da8bec2716520029b7966-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13e4b562694da8bec2716520029b7966\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:15:16.104628 kubelet[2386]: I0625 14:15:16.104581 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:16.104628 kubelet[2386]: I0625 14:15:16.104603 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:16.104628 kubelet[2386]: I0625 14:15:16.104622 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:16.104809 kubelet[2386]: I0625 14:15:16.104642 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13e4b562694da8bec2716520029b7966-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13e4b562694da8bec2716520029b7966\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:15:16.104809 kubelet[2386]: I0625 14:15:16.104661 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13e4b562694da8bec2716520029b7966-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13e4b562694da8bec2716520029b7966\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:15:16.104809 kubelet[2386]: I0625 14:15:16.104678 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 14:15:16.104809 kubelet[2386]: I0625 14:15:16.104697 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:16.104809 kubelet[2386]: I0625 14:15:16.104717 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:15:16.248528 kubelet[2386]: E0625 14:15:16.248487 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:16.248528 kubelet[2386]: E0625 14:15:16.248506 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:16.248749 kubelet[2386]: E0625 14:15:16.248733 2386 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:16.797339 kubelet[2386]: I0625 14:15:16.797293 2386 apiserver.go:52] "Watching apiserver" Jun 25 14:15:16.803321 kubelet[2386]: I0625 14:15:16.803272 2386 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:15:16.849477 kubelet[2386]: E0625 14:15:16.849399 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:16.849477 kubelet[2386]: E0625 14:15:16.849418 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:16.849477 kubelet[2386]: E0625 14:15:16.849477 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:16.878684 kubelet[2386]: I0625 14:15:16.878654 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.878591766 podCreationTimestamp="2024-06-25 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:15:16.878584326 +0000 UTC m=+1.159898392" watchObservedRunningTime="2024-06-25 14:15:16.878591766 +0000 UTC m=+1.159905792" Jun 25 14:15:16.892213 kubelet[2386]: I0625 14:15:16.892184 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.89212972 podCreationTimestamp="2024-06-25 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:15:16.89194856 +0000 UTC m=+1.173262586" watchObservedRunningTime="2024-06-25 14:15:16.89212972 +0000 UTC m=+1.173443746" Jun 25 14:15:16.900525 kubelet[2386]: I0625 14:15:16.900495 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.900451332 podCreationTimestamp="2024-06-25 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:15:16.900427412 +0000 UTC m=+1.181741438" watchObservedRunningTime="2024-06-25 14:15:16.900451332 +0000 UTC m=+1.181765358" Jun 25 14:15:17.851118 kubelet[2386]: E0625 14:15:17.851084 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:17.851498 kubelet[2386]: E0625 14:15:17.851320 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:18.853107 kubelet[2386]: E0625 14:15:18.853047 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:21.030431 sudo[1529]: pam_unix(sudo:session): session closed for user root Jun 25 14:15:21.029000 audit[1529]: USER_END pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:21.029000 audit[1529]: CRED_DISP pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:15:21.035120 kernel: audit: type=1106 audit(1719324921.029:223): pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:15:21.035197 kernel: audit: type=1104 audit(1719324921.029:224): pid=1529 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:15:21.035451 sshd[1523]: pam_unix(sshd:session): session closed for user core Jun 25 14:15:21.035000 audit[1523]: USER_END pid=1523 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:21.038953 systemd-logind[1335]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:15:21.039259 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:56284.service: Deactivated successfully. Jun 25 14:15:21.036000 audit[1523]: CRED_DISP pid=1523 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:21.040147 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:15:21.040669 systemd-logind[1335]: Removed session 7. 
Jun 25 14:15:21.041751 kernel: audit: type=1106 audit(1719324921.035:225): pid=1523 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:21.041809 kernel: audit: type=1104 audit(1719324921.036:226): pid=1523 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:21.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:21.043936 kernel: audit: type=1131 audit(1719324921.038:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.23:22-10.0.0.1:56284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:23.924549 kubelet[2386]: E0625 14:15:23.924497 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:24.866174 kubelet[2386]: E0625 14:15:24.865714 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:27.843280 kubelet[2386]: E0625 14:15:27.843251 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:28.417623 kubelet[2386]: I0625 14:15:28.417595 2386 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:15:28.418155 containerd[1350]: time="2024-06-25T14:15:28.418117227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 14:15:28.418480 kubelet[2386]: I0625 14:15:28.418353 2386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:15:28.596820 kubelet[2386]: E0625 14:15:28.596786 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:28.716689 update_engine[1337]: I0625 14:15:28.716082 1337 update_attempter.cc:509] Updating boot flags... 
Jun 25 14:15:28.748585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2482) Jun 25 14:15:28.870951 kubelet[2386]: E0625 14:15:28.870924 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:29.087816 kubelet[2386]: I0625 14:15:29.087784 2386 topology_manager.go:215] "Topology Admit Handler" podUID="dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab" podNamespace="kube-system" podName="kube-proxy-76b44" Jun 25 14:15:29.188036 kubelet[2386]: I0625 14:15:29.188002 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab-lib-modules\") pod \"kube-proxy-76b44\" (UID: \"dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab\") " pod="kube-system/kube-proxy-76b44" Jun 25 14:15:29.188230 kubelet[2386]: I0625 14:15:29.188217 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab-kube-proxy\") pod \"kube-proxy-76b44\" (UID: \"dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab\") " pod="kube-system/kube-proxy-76b44" Jun 25 14:15:29.188330 kubelet[2386]: I0625 14:15:29.188318 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab-xtables-lock\") pod \"kube-proxy-76b44\" (UID: \"dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab\") " pod="kube-system/kube-proxy-76b44" Jun 25 14:15:29.188411 kubelet[2386]: I0625 14:15:29.188400 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74k6\" (UniqueName: \"kubernetes.io/projected/dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab-kube-api-access-w74k6\") 
pod \"kube-proxy-76b44\" (UID: \"dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab\") " pod="kube-system/kube-proxy-76b44" Jun 25 14:15:29.330073 kubelet[2386]: I0625 14:15:29.330009 2386 topology_manager.go:215] "Topology Admit Handler" podUID="7f43ae35-7229-4847-aa91-f87ba9505181" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-8n5p6" Jun 25 14:15:29.389853 kubelet[2386]: I0625 14:15:29.389759 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f43ae35-7229-4847-aa91-f87ba9505181-var-lib-calico\") pod \"tigera-operator-76c4974c85-8n5p6\" (UID: \"7f43ae35-7229-4847-aa91-f87ba9505181\") " pod="tigera-operator/tigera-operator-76c4974c85-8n5p6" Jun 25 14:15:29.390046 kubelet[2386]: I0625 14:15:29.390033 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gt6b\" (UniqueName: \"kubernetes.io/projected/7f43ae35-7229-4847-aa91-f87ba9505181-kube-api-access-4gt6b\") pod \"tigera-operator-76c4974c85-8n5p6\" (UID: \"7f43ae35-7229-4847-aa91-f87ba9505181\") " pod="tigera-operator/tigera-operator-76c4974c85-8n5p6" Jun 25 14:15:29.390877 kubelet[2386]: E0625 14:15:29.390858 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:29.394624 containerd[1350]: time="2024-06-25T14:15:29.394567623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76b44,Uid:dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab,Namespace:kube-system,Attempt:0,}" Jun 25 14:15:29.414270 containerd[1350]: time="2024-06-25T14:15:29.413825755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:29.414270 containerd[1350]: time="2024-06-25T14:15:29.414259954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:29.414409 containerd[1350]: time="2024-06-25T14:15:29.414283234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:29.414409 containerd[1350]: time="2024-06-25T14:15:29.414295954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:29.453670 containerd[1350]: time="2024-06-25T14:15:29.453624617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76b44,Uid:dda608a5-c4e9-4a1b-b0eb-505a3f18f7ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f5b9e1e75d4c3b78a3c4e6b9291e27955b5f12ae446047798c8ef662404e07b\"" Jun 25 14:15:29.454476 kubelet[2386]: E0625 14:15:29.454456 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:29.459877 containerd[1350]: time="2024-06-25T14:15:29.459499168Z" level=info msg="CreateContainer within sandbox \"5f5b9e1e75d4c3b78a3c4e6b9291e27955b5f12ae446047798c8ef662404e07b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:15:29.485416 containerd[1350]: time="2024-06-25T14:15:29.485354690Z" level=info msg="CreateContainer within sandbox \"5f5b9e1e75d4c3b78a3c4e6b9291e27955b5f12ae446047798c8ef662404e07b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef8214821d27a885bb0d73d2ebe23c5a791cb5d3dfcf46e11c4dbdea4b5852d4\"" Jun 25 14:15:29.487946 containerd[1350]: time="2024-06-25T14:15:29.487908607Z" level=info msg="StartContainer for 
\"ef8214821d27a885bb0d73d2ebe23c5a791cb5d3dfcf46e11c4dbdea4b5852d4\"" Jun 25 14:15:29.544855 containerd[1350]: time="2024-06-25T14:15:29.544795763Z" level=info msg="StartContainer for \"ef8214821d27a885bb0d73d2ebe23c5a791cb5d3dfcf46e11c4dbdea4b5852d4\" returns successfully" Jun 25 14:15:29.633707 containerd[1350]: time="2024-06-25T14:15:29.633661833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8n5p6,Uid:7f43ae35-7229-4847-aa91-f87ba9505181,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:15:29.657619 containerd[1350]: time="2024-06-25T14:15:29.657453119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:29.657619 containerd[1350]: time="2024-06-25T14:15:29.657513519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:29.657619 containerd[1350]: time="2024-06-25T14:15:29.657531039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:29.657619 containerd[1350]: time="2024-06-25T14:15:29.657541159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:29.707345 kernel: audit: type=1325 audit(1719324929.698:228): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.707497 kernel: audit: type=1300 audit(1719324929.698:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3127660 a2=0 a3=1 items=0 ppid=2550 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.707526 kernel: audit: type=1327 audit(1719324929.698:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:15:29.707562 kernel: audit: type=1325 audit(1719324929.699:229): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.707590 kernel: audit: type=1300 audit(1719324929.699:229): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea206830 a2=0 a3=1 items=0 ppid=2550 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.698000 audit[2627]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.698000 audit[2627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3127660 a2=0 a3=1 items=0 ppid=2550 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.698000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:15:29.699000 audit[2628]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.699000 audit[2628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea206830 a2=0 a3=1 items=0 ppid=2550 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:15:29.711222 kernel: audit: type=1327 audit(1719324929.699:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:15:29.700000 audit[2629]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.712851 kernel: audit: type=1325 audit(1719324929.700:230): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.700000 audit[2629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe150adf0 a2=0 a3=1 items=0 ppid=2550 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.715807 kernel: audit: type=1300 audit(1719324929.700:230): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe150adf0 a2=0 a3=1 items=0 ppid=2550 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.715882 kernel: audit: type=1327 audit(1719324929.700:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:15:29.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:15:29.717082 kernel: audit: type=1325 audit(1719324929.701:231): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.701000 audit[2630]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.701000 audit[2630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce3a4150 a2=0 a3=1 items=0 ppid=2550 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:15:29.709000 audit[2631]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.709000 audit[2631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9e4aa90 a2=0 a3=1 items=0 ppid=2550 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:15:29.710000 audit[2633]: NETFILTER_CFG 
table=filter:43 family=10 entries=1 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.710000 audit[2633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb6b6060 a2=0 a3=1 items=0 ppid=2550 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:15:29.719442 containerd[1350]: time="2024-06-25T14:15:29.719396188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8n5p6,Uid:7f43ae35-7229-4847-aa91-f87ba9505181,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"023b126d8f8af4b32f8eafa2a6b2a67c68c69d1af641e1fb4aad6e83c13a26a9\"" Jun 25 14:15:29.724519 containerd[1350]: time="2024-06-25T14:15:29.724481061Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:15:29.803000 audit[2639]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.803000 audit[2639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff5c2ea40 a2=0 a3=1 items=0 ppid=2550 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:15:29.806000 audit[2641]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.806000 audit[2641]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcb76e640 a2=0 a3=1 items=0 ppid=2550 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:15:29.813000 audit[2644]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.813000 audit[2644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd10b0d10 a2=0 a3=1 items=0 ppid=2550 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:15:29.815000 audit[2645]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.815000 audit[2645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbc68100 a2=0 a3=1 items=0 ppid=2550 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.815000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:15:29.818000 audit[2647]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.818000 audit[2647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8828390 a2=0 a3=1 items=0 ppid=2550 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:15:29.819000 audit[2648]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.819000 audit[2648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7b79b50 a2=0 a3=1 items=0 ppid=2550 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:15:29.822000 audit[2650]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.822000 audit[2650]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcc51efc0 a2=0 a3=1 items=0 ppid=2550 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:15:29.826000 audit[2653]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.826000 audit[2653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc1b8aac0 a2=0 a3=1 items=0 ppid=2550 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:15:29.827000 audit[2654]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.827000 audit[2654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf17aa60 a2=0 a3=1 items=0 ppid=2550 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:15:29.830000 audit[2656]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2656 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.830000 audit[2656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce4fd7f0 a2=0 a3=1 items=0 ppid=2550 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:15:29.832000 audit[2657]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.832000 audit[2657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2c38280 a2=0 a3=1 items=0 ppid=2550 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:15:29.835000 audit[2659]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.835000 audit[2659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4292d00 a2=0 a3=1 items=0 ppid=2550 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.835000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:15:29.839000 audit[2662]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.839000 audit[2662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf720ed0 a2=0 a3=1 items=0 ppid=2550 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:15:29.845000 audit[2665]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.845000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea400220 a2=0 a3=1 items=0 ppid=2550 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:15:29.846000 audit[2666]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.846000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1acc4e0 a2=0 a3=1 items=0 ppid=2550 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:15:29.850000 audit[2668]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.850000 audit[2668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcd5b6d30 a2=0 a3=1 items=0 ppid=2550 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:15:29.855000 audit[2671]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.855000 audit[2671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef09cc60 a2=0 a3=1 items=0 ppid=2550 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.855000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:15:29.859000 audit[2672]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.859000 audit[2672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc032f520 a2=0 a3=1 items=0 ppid=2550 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:15:29.862000 audit[2674]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:15:29.862000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe81fa6e0 a2=0 a3=1 items=0 ppid=2550 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:15:29.875467 kubelet[2386]: E0625 14:15:29.875138 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:29.892000 audit[2680]: NETFILTER_CFG table=filter:63 family=2 entries=8 
op=nft_register_rule pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:29.892000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffce7a0a50 a2=0 a3=1 items=0 ppid=2550 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:29.905000 audit[2680]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:29.905000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffce7a0a50 a2=0 a3=1 items=0 ppid=2550 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:29.907000 audit[2684]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.907000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd916fa40 a2=0 a3=1 items=0 ppid=2550 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:15:29.910000 audit[2686]: 
NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.910000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff5d997b0 a2=0 a3=1 items=0 ppid=2550 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:15:29.915000 audit[2689]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.915000 audit[2689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffffbacf00 a2=0 a3=1 items=0 ppid=2550 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:15:29.916000 audit[2690]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.916000 audit[2690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff311b90 a2=0 a3=1 items=0 ppid=2550 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:15:29.919000 audit[2692]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.919000 audit[2692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe511b4c0 a2=0 a3=1 items=0 ppid=2550 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:15:29.920000 audit[2693]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.920000 audit[2693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7e21f70 a2=0 a3=1 items=0 ppid=2550 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:15:29.924000 audit[2695]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.924000 audit[2695]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffcf50d6a0 a2=0 a3=1 items=0 ppid=2550 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:15:29.932000 audit[2698]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.932000 audit[2698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe3593dd0 a2=0 a3=1 items=0 ppid=2550 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:15:29.933000 audit[2699]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.933000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0da1870 a2=0 a3=1 items=0 ppid=2550 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.933000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:15:29.936000 audit[2701]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2701 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.936000 audit[2701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdc3eff70 a2=0 a3=1 items=0 ppid=2550 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:15:29.937000 audit[2702]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.937000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9599c30 a2=0 a3=1 items=0 ppid=2550 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:15:29.940000 audit[2704]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.940000 audit[2704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffca24610 a2=0 a3=1 items=0 ppid=2550 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:15:29.944000 audit[2707]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.944000 audit[2707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc5531190 a2=0 a3=1 items=0 ppid=2550 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:15:29.948000 audit[2710]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2710 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.948000 audit[2710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff5297100 a2=0 a3=1 items=0 ppid=2550 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.948000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:15:29.949000 audit[2711]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.949000 audit[2711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffccb7a360 a2=0 a3=1 items=0 ppid=2550 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:15:29.952000 audit[2713]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.952000 audit[2713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffef624ec0 a2=0 a3=1 items=0 ppid=2550 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:15:29.957000 audit[2716]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.957000 audit[2716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff58ca020 a2=0 a3=1 items=0 ppid=2550 
pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:15:29.958000 audit[2717]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.958000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff02b8da0 a2=0 a3=1 items=0 ppid=2550 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:15:29.961000 audit[2719]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.961000 audit[2719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff00c17b0 a2=0 a3=1 items=0 ppid=2550 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:15:29.962000 audit[2720]: NETFILTER_CFG table=filter:84 
family=10 entries=1 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.962000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9a8b600 a2=0 a3=1 items=0 ppid=2550 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:15:29.964000 audit[2722]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.964000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc9bdf280 a2=0 a3=1 items=0 ppid=2550 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:15:29.968000 audit[2725]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:15:29.968000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff35890c0 a2=0 a3=1 items=0 ppid=2550 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:15:29.971000 
audit[2727]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:15:29.971000 audit[2727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffd0e90ba0 a2=0 a3=1 items=0 ppid=2550 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.971000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:29.972000 audit[2727]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:15:29.972000 audit[2727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd0e90ba0 a2=0 a3=1 items=0 ppid=2550 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:29.972000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:30.677849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503678875.mount: Deactivated successfully. 
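The `audit: PROCTITLE` records above carry the invoked command line hex-encoded, with argv elements separated by NUL bytes. A minimal Python sketch (not part of this log; `decode_proctitle` is an illustrative helper name) for decoding such a value:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv with NUL separators."""
    raw = bytes.fromhex(hex_value)
    # argv elements are NUL-separated; join with spaces for display
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# Prefix of a proctitle value from the log above (an ip6tables invocation)
sample = "6970367461626C6573002D770035002D5700313030303030"
print(decode_proctitle(sample))  # -> ip6tables -w 5 -W 100000
```

Decoded this way, the records show kube-proxy driving `xtables-nft-multi` (e.g. `ip6tables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter` and `iptables-restore -w 5 -W 100000 --noflush --counters`), which matches the `nft_register_chain`/`nft_register_rule` operations logged alongside them.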
Jun 25 14:15:31.004030 containerd[1350]: time="2024-06-25T14:15:31.003974922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:31.004930 containerd[1350]: time="2024-06-25T14:15:31.004881961Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473622" Jun 25 14:15:31.005819 containerd[1350]: time="2024-06-25T14:15:31.005790920Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:31.014695 containerd[1350]: time="2024-06-25T14:15:31.014635509Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:31.022414 containerd[1350]: time="2024-06-25T14:15:31.022370219Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:31.023949 containerd[1350]: time="2024-06-25T14:15:31.023909497Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.299379316s" Jun 25 14:15:31.024044 containerd[1350]: time="2024-06-25T14:15:31.023949937Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:15:31.029144 containerd[1350]: time="2024-06-25T14:15:31.029091090Z" level=info msg="CreateContainer within sandbox 
\"023b126d8f8af4b32f8eafa2a6b2a67c68c69d1af641e1fb4aad6e83c13a26a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:15:31.039137 containerd[1350]: time="2024-06-25T14:15:31.039079397Z" level=info msg="CreateContainer within sandbox \"023b126d8f8af4b32f8eafa2a6b2a67c68c69d1af641e1fb4aad6e83c13a26a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"600da558ebe5582f830e21b2dc29159baf326103a5f0eed9e9d3488645e894ac\"" Jun 25 14:15:31.041301 containerd[1350]: time="2024-06-25T14:15:31.041250954Z" level=info msg="StartContainer for \"600da558ebe5582f830e21b2dc29159baf326103a5f0eed9e9d3488645e894ac\"" Jun 25 14:15:31.104798 containerd[1350]: time="2024-06-25T14:15:31.104739273Z" level=info msg="StartContainer for \"600da558ebe5582f830e21b2dc29159baf326103a5f0eed9e9d3488645e894ac\" returns successfully" Jun 25 14:15:31.901759 kubelet[2386]: I0625 14:15:31.901691 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-76b44" podStartSLOduration=2.901652889 podCreationTimestamp="2024-06-25 14:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:15:29.884641507 +0000 UTC m=+14.165955573" watchObservedRunningTime="2024-06-25 14:15:31.901652889 +0000 UTC m=+16.182966995" Jun 25 14:15:31.902192 kubelet[2386]: I0625 14:15:31.901778 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-8n5p6" podStartSLOduration=1.595688822 podCreationTimestamp="2024-06-25 14:15:29 +0000 UTC" firstStartedPulling="2024-06-25 14:15:29.720710146 +0000 UTC m=+14.002024172" lastFinishedPulling="2024-06-25 14:15:31.026782813 +0000 UTC m=+15.308096839" observedRunningTime="2024-06-25 14:15:31.901558329 +0000 UTC m=+16.182872395" watchObservedRunningTime="2024-06-25 14:15:31.901761489 +0000 UTC m=+16.183075515" Jun 25 14:15:34.876000 
audit[2778]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.880045 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 14:15:34.880101 kernel: audit: type=1325 audit(1719324934.876:279): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.880124 kernel: audit: type=1300 audit(1719324934.876:279): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe0565560 a2=0 a3=1 items=0 ppid=2550 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.876000 audit[2778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe0565560 a2=0 a3=1 items=0 ppid=2550 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.884193 kernel: audit: type=1327 audit(1719324934.876:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.877000 audit[2778]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.877000 audit[2778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe0565560 a2=0 a3=1 items=0 ppid=2550 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.889170 kernel: audit: type=1325 audit(1719324934.877:280): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.889256 kernel: audit: type=1300 audit(1719324934.877:280): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe0565560 a2=0 a3=1 items=0 ppid=2550 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.889281 kernel: audit: type=1327 audit(1719324934.877:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.888000 audit[2780]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.888000 audit[2780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffec609ff0 a2=0 a3=1 items=0 ppid=2550 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.895531 kernel: audit: type=1325 audit(1719324934.888:281): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.895588 kernel: audit: type=1300 audit(1719324934.888:281): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffec609ff0 a2=0 a3=1 items=0 ppid=2550 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.895607 kernel: audit: type=1327 audit(1719324934.888:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.891000 audit[2780]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:34.891000 audit[2780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffec609ff0 a2=0 a3=1 items=0 ppid=2550 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:34.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:34.899915 kernel: audit: type=1325 audit(1719324934.891:282): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:35.007058 kubelet[2386]: I0625 14:15:35.007015 2386 topology_manager.go:215] "Topology Admit Handler" podUID="b9a15f95-438f-47a4-a1ea-03010c323249" podNamespace="calico-system" podName="calico-typha-79bb454f64-cblmq" Jun 25 14:15:35.027582 kubelet[2386]: I0625 14:15:35.027546 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b9a15f95-438f-47a4-a1ea-03010c323249-typha-certs\") pod \"calico-typha-79bb454f64-cblmq\" (UID: \"b9a15f95-438f-47a4-a1ea-03010c323249\") " pod="calico-system/calico-typha-79bb454f64-cblmq" Jun 25 14:15:35.027582 kubelet[2386]: I0625 14:15:35.027591 2386 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vrl\" (UniqueName: \"kubernetes.io/projected/b9a15f95-438f-47a4-a1ea-03010c323249-kube-api-access-95vrl\") pod \"calico-typha-79bb454f64-cblmq\" (UID: \"b9a15f95-438f-47a4-a1ea-03010c323249\") " pod="calico-system/calico-typha-79bb454f64-cblmq" Jun 25 14:15:35.027755 kubelet[2386]: I0625 14:15:35.027619 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9a15f95-438f-47a4-a1ea-03010c323249-tigera-ca-bundle\") pod \"calico-typha-79bb454f64-cblmq\" (UID: \"b9a15f95-438f-47a4-a1ea-03010c323249\") " pod="calico-system/calico-typha-79bb454f64-cblmq" Jun 25 14:15:35.053563 kubelet[2386]: I0625 14:15:35.053531 2386 topology_manager.go:215] "Topology Admit Handler" podUID="49ec256b-ce2d-4c39-96cb-ffd0d59ae64f" podNamespace="calico-system" podName="calico-node-pqgng" Jun 25 14:15:35.128527 kubelet[2386]: I0625 14:15:35.128424 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-tigera-ca-bundle\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128527 kubelet[2386]: I0625 14:15:35.128465 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-node-certs\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128527 kubelet[2386]: I0625 14:15:35.128489 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-cni-bin-dir\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128527 kubelet[2386]: I0625 14:15:35.128510 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-xtables-lock\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128527 kubelet[2386]: I0625 14:15:35.128531 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-225p7\" (UniqueName: \"kubernetes.io/projected/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-kube-api-access-225p7\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128742 kubelet[2386]: I0625 14:15:35.128551 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-lib-modules\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128742 kubelet[2386]: I0625 14:15:35.128574 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-flexvol-driver-host\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128742 kubelet[2386]: I0625 14:15:35.128611 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-policysync\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128742 kubelet[2386]: I0625 14:15:35.128631 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-cni-log-dir\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128742 kubelet[2386]: I0625 14:15:35.128680 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-var-run-calico\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128846 kubelet[2386]: I0625 14:15:35.128699 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-var-lib-calico\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.128846 kubelet[2386]: I0625 14:15:35.128747 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49ec256b-ce2d-4c39-96cb-ffd0d59ae64f-cni-net-dir\") pod \"calico-node-pqgng\" (UID: \"49ec256b-ce2d-4c39-96cb-ffd0d59ae64f\") " pod="calico-system/calico-node-pqgng" Jun 25 14:15:35.169429 kubelet[2386]: I0625 14:15:35.169393 2386 topology_manager.go:215] "Topology Admit Handler" podUID="ab547801-7d4b-41c4-b3b9-81712e462073" podNamespace="calico-system" podName="csi-node-driver-v8lrw" Jun 25 14:15:35.169821 kubelet[2386]: E0625 
14:15:35.169793 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8lrw" podUID="ab547801-7d4b-41c4-b3b9-81712e462073" Jun 25 14:15:35.229460 kubelet[2386]: I0625 14:15:35.229420 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ab547801-7d4b-41c4-b3b9-81712e462073-varrun\") pod \"csi-node-driver-v8lrw\" (UID: \"ab547801-7d4b-41c4-b3b9-81712e462073\") " pod="calico-system/csi-node-driver-v8lrw" Jun 25 14:15:35.229655 kubelet[2386]: I0625 14:15:35.229642 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab547801-7d4b-41c4-b3b9-81712e462073-kubelet-dir\") pod \"csi-node-driver-v8lrw\" (UID: \"ab547801-7d4b-41c4-b3b9-81712e462073\") " pod="calico-system/csi-node-driver-v8lrw" Jun 25 14:15:35.229875 kubelet[2386]: I0625 14:15:35.229846 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ab547801-7d4b-41c4-b3b9-81712e462073-socket-dir\") pod \"csi-node-driver-v8lrw\" (UID: \"ab547801-7d4b-41c4-b3b9-81712e462073\") " pod="calico-system/csi-node-driver-v8lrw" Jun 25 14:15:35.230220 kubelet[2386]: I0625 14:15:35.230183 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ab547801-7d4b-41c4-b3b9-81712e462073-registration-dir\") pod \"csi-node-driver-v8lrw\" (UID: \"ab547801-7d4b-41c4-b3b9-81712e462073\") " pod="calico-system/csi-node-driver-v8lrw" Jun 25 14:15:35.230291 kubelet[2386]: I0625 14:15:35.230277 2386 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bcx\" (UniqueName: \"kubernetes.io/projected/ab547801-7d4b-41c4-b3b9-81712e462073-kube-api-access-l6bcx\") pod \"csi-node-driver-v8lrw\" (UID: \"ab547801-7d4b-41c4-b3b9-81712e462073\") " pod="calico-system/csi-node-driver-v8lrw" Jun 25 14:15:35.253501 kubelet[2386]: E0625 14:15:35.253472 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:35.253664 kubelet[2386]: W0625 14:15:35.253647 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:35.253746 kubelet[2386]: E0625 14:15:35.253733 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:35.311455 kubelet[2386]: E0625 14:15:35.311414 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:35.313146 containerd[1350]: time="2024-06-25T14:15:35.313104619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79bb454f64-cblmq,Uid:b9a15f95-438f-47a4-a1ea-03010c323249,Namespace:calico-system,Attempt:0,}" Jun 25 14:15:35.331566 kubelet[2386]: E0625 14:15:35.331534 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:35.331566 kubelet[2386]: W0625 14:15:35.331557 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:35.331719 kubelet[2386]: E0625 14:15:35.331579 2386 
plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:35.331765 kubelet[2386]: E0625 14:15:35.331748 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:35.331765 kubelet[2386]: W0625 14:15:35.331761 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:35.331821 kubelet[2386]: E0625 14:15:35.331773 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:35.331943 kubelet[2386]: E0625 14:15:35.331926 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:35.331943 kubelet[2386]: W0625 14:15:35.331938 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:35.332015 kubelet[2386]: E0625 14:15:35.331949 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:35.339149 kubelet[2386]: E0625 14:15:35.339137 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:35.339205 kubelet[2386]: W0625 14:15:35.339194 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:35.339271 kubelet[2386]: E0625 14:15:35.339261 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:35.339907 kubelet[2386]: E0625 14:15:35.339873 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:35.340019 kubelet[2386]: W0625 14:15:35.339999 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:35.340084 kubelet[2386]: E0625 14:15:35.340074 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:35.343725 containerd[1350]: time="2024-06-25T14:15:35.339626433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:35.343725 containerd[1350]: time="2024-06-25T14:15:35.339678913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:35.343725 containerd[1350]: time="2024-06-25T14:15:35.339697433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:35.343725 containerd[1350]: time="2024-06-25T14:15:35.339707033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:35.359972 kubelet[2386]: E0625 14:15:35.357167 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:35.360278 containerd[1350]: time="2024-06-25T14:15:35.357774855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pqgng,Uid:49ec256b-ce2d-4c39-96cb-ffd0d59ae64f,Namespace:calico-system,Attempt:0,}" Jun 25 14:15:35.384333 containerd[1350]: time="2024-06-25T14:15:35.381348712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:35.384333 containerd[1350]: time="2024-06-25T14:15:35.381397112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:35.384333 containerd[1350]: time="2024-06-25T14:15:35.381414432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:35.384333 containerd[1350]: time="2024-06-25T14:15:35.381423752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:35.407933 containerd[1350]: time="2024-06-25T14:15:35.407877285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79bb454f64-cblmq,Uid:b9a15f95-438f-47a4-a1ea-03010c323249,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffe1c15744dbdad993fd85811a96a9890928f4879070ca0ca4729101c918fff0\"" Jun 25 14:15:35.410724 kubelet[2386]: E0625 14:15:35.410241 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:35.413182 containerd[1350]: time="2024-06-25T14:15:35.412791680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:15:35.441097 containerd[1350]: time="2024-06-25T14:15:35.441053932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pqgng,Uid:49ec256b-ce2d-4c39-96cb-ffd0d59ae64f,Namespace:calico-system,Attempt:0,} returns sandbox id \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\"" Jun 25 14:15:35.442674 kubelet[2386]: E0625 14:15:35.442503 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:35.904000 audit[2904]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:35.904000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd397c240 a2=0 a3=1 items=0 ppid=2550 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:35.904000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:35.905000 audit[2904]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:35.905000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd397c240 a2=0 a3=1 items=0 ppid=2550 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:35.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:36.693422 containerd[1350]: time="2024-06-25T14:15:36.693373532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:36.694277 containerd[1350]: time="2024-06-25T14:15:36.694237972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:15:36.702310 containerd[1350]: time="2024-06-25T14:15:36.702261924Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:36.703859 containerd[1350]: time="2024-06-25T14:15:36.703828563Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:36.705855 containerd[1350]: time="2024-06-25T14:15:36.705814521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:36.706567 containerd[1350]: 
time="2024-06-25T14:15:36.706534200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.29369892s" Jun 25 14:15:36.706618 containerd[1350]: time="2024-06-25T14:15:36.706569040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:15:36.707195 containerd[1350]: time="2024-06-25T14:15:36.707160080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:15:36.715779 containerd[1350]: time="2024-06-25T14:15:36.715738392Z" level=info msg="CreateContainer within sandbox \"ffe1c15744dbdad993fd85811a96a9890928f4879070ca0ca4729101c918fff0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:15:36.729848 containerd[1350]: time="2024-06-25T14:15:36.729797299Z" level=info msg="CreateContainer within sandbox \"ffe1c15744dbdad993fd85811a96a9890928f4879070ca0ca4729101c918fff0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2889fb19c0d99ae37280de26b1e69993b9d93634b0874d3810c425a6c7d7bbff\"" Jun 25 14:15:36.730543 containerd[1350]: time="2024-06-25T14:15:36.730453418Z" level=info msg="StartContainer for \"2889fb19c0d99ae37280de26b1e69993b9d93634b0874d3810c425a6c7d7bbff\"" Jun 25 14:15:36.787842 containerd[1350]: time="2024-06-25T14:15:36.787763725Z" level=info msg="StartContainer for \"2889fb19c0d99ae37280de26b1e69993b9d93634b0874d3810c425a6c7d7bbff\" returns successfully" Jun 25 14:15:36.840056 kubelet[2386]: E0625 14:15:36.840021 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8lrw" podUID="ab547801-7d4b-41c4-b3b9-81712e462073" Jun 25 14:15:36.899066 kubelet[2386]: E0625 14:15:36.899033 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:36.937063 kubelet[2386]: E0625 14:15:36.937037 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.937063 kubelet[2386]: W0625 14:15:36.937057 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.937215 kubelet[2386]: E0625 14:15:36.937078 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.937297 kubelet[2386]: E0625 14:15:36.937281 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.937297 kubelet[2386]: W0625 14:15:36.937294 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.937370 kubelet[2386]: E0625 14:15:36.937306 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.939744 kubelet[2386]: E0625 14:15:36.939733 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.939744 kubelet[2386]: W0625 14:15:36.939744 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.939813 kubelet[2386]: E0625 14:15:36.939753 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.947273 kubelet[2386]: E0625 14:15:36.947170 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.947273 kubelet[2386]: W0625 14:15:36.947190 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.947273 kubelet[2386]: E0625 14:15:36.947216 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.949741 kubelet[2386]: E0625 14:15:36.949718 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.949741 kubelet[2386]: W0625 14:15:36.949735 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.949862 kubelet[2386]: E0625 14:15:36.949756 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.950086 kubelet[2386]: E0625 14:15:36.949977 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.950086 kubelet[2386]: W0625 14:15:36.949990 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.950086 kubelet[2386]: E0625 14:15:36.950022 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.950191 kubelet[2386]: E0625 14:15:36.950139 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.950191 kubelet[2386]: W0625 14:15:36.950145 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.950191 kubelet[2386]: E0625 14:15:36.950160 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.950321 kubelet[2386]: E0625 14:15:36.950307 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.950321 kubelet[2386]: W0625 14:15:36.950316 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.950382 kubelet[2386]: E0625 14:15:36.950330 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.950466 kubelet[2386]: E0625 14:15:36.950454 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.950466 kubelet[2386]: W0625 14:15:36.950463 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.950530 kubelet[2386]: E0625 14:15:36.950477 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.950753 kubelet[2386]: E0625 14:15:36.950740 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.950800 kubelet[2386]: W0625 14:15:36.950769 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.950836 kubelet[2386]: E0625 14:15:36.950824 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.951082 kubelet[2386]: E0625 14:15:36.951065 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.951082 kubelet[2386]: W0625 14:15:36.951081 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.951165 kubelet[2386]: E0625 14:15:36.951101 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.951308 kubelet[2386]: E0625 14:15:36.951294 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.951353 kubelet[2386]: W0625 14:15:36.951308 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.951380 kubelet[2386]: E0625 14:15:36.951358 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.951521 kubelet[2386]: E0625 14:15:36.951508 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.951561 kubelet[2386]: W0625 14:15:36.951521 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.951561 kubelet[2386]: E0625 14:15:36.951540 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.951768 kubelet[2386]: E0625 14:15:36.951741 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.951768 kubelet[2386]: W0625 14:15:36.951759 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.951842 kubelet[2386]: E0625 14:15:36.951774 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.951973 kubelet[2386]: E0625 14:15:36.951958 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.951973 kubelet[2386]: W0625 14:15:36.951969 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.952050 kubelet[2386]: E0625 14:15:36.951985 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.952170 kubelet[2386]: E0625 14:15:36.952153 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.952170 kubelet[2386]: W0625 14:15:36.952170 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.952264 kubelet[2386]: E0625 14:15:36.952186 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.952605 kubelet[2386]: E0625 14:15:36.952481 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.952605 kubelet[2386]: W0625 14:15:36.952498 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.952605 kubelet[2386]: E0625 14:15:36.952520 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.952885 kubelet[2386]: E0625 14:15:36.952761 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.952885 kubelet[2386]: W0625 14:15:36.952773 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.952885 kubelet[2386]: E0625 14:15:36.952792 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.953282 kubelet[2386]: E0625 14:15:36.953065 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.953282 kubelet[2386]: W0625 14:15:36.953077 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.953282 kubelet[2386]: E0625 14:15:36.953095 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:36.953389 kubelet[2386]: E0625 14:15:36.953337 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.953389 kubelet[2386]: W0625 14:15:36.953348 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.953389 kubelet[2386]: E0625 14:15:36.953366 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:15:36.953535 kubelet[2386]: E0625 14:15:36.953521 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:15:36.953535 kubelet[2386]: W0625 14:15:36.953531 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:15:36.953595 kubelet[2386]: E0625 14:15:36.953541 2386 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:15:37.595418 containerd[1350]: time="2024-06-25T14:15:37.595347768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:37.595861 containerd[1350]: time="2024-06-25T14:15:37.595812927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:15:37.597396 containerd[1350]: time="2024-06-25T14:15:37.597363486Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:37.598792 containerd[1350]: time="2024-06-25T14:15:37.598737685Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:37.633505 containerd[1350]: time="2024-06-25T14:15:37.633433135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:37.635367 containerd[1350]: 
time="2024-06-25T14:15:37.635300813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 927.980493ms" Jun 25 14:15:37.635367 containerd[1350]: time="2024-06-25T14:15:37.635369053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:15:37.638799 containerd[1350]: time="2024-06-25T14:15:37.638753930Z" level=info msg="CreateContainer within sandbox \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:15:37.651949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739957960.mount: Deactivated successfully. 
Jun 25 14:15:37.689666 containerd[1350]: time="2024-06-25T14:15:37.689607606Z" level=info msg="CreateContainer within sandbox \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f\"" Jun 25 14:15:37.691849 containerd[1350]: time="2024-06-25T14:15:37.691806004Z" level=info msg="StartContainer for \"eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f\"" Jun 25 14:15:37.749836 containerd[1350]: time="2024-06-25T14:15:37.749783513Z" level=info msg="StartContainer for \"eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f\" returns successfully" Jun 25 14:15:37.802501 containerd[1350]: time="2024-06-25T14:15:37.801342788Z" level=info msg="shim disconnected" id=eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f namespace=k8s.io Jun 25 14:15:37.802501 containerd[1350]: time="2024-06-25T14:15:37.801408068Z" level=warning msg="cleaning up after shim disconnected" id=eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f namespace=k8s.io Jun 25 14:15:37.802501 containerd[1350]: time="2024-06-25T14:15:37.801416628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:15:37.816180 containerd[1350]: time="2024-06-25T14:15:37.815144176Z" level=warning msg="cleanup warnings time=\"2024-06-25T14:15:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 14:15:37.902167 kubelet[2386]: I0625 14:15:37.902071 2386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:15:37.902766 kubelet[2386]: E0625 14:15:37.902692 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:37.902972 
kubelet[2386]: E0625 14:15:37.902954 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:37.904794 containerd[1350]: time="2024-06-25T14:15:37.904759818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:15:37.917567 kubelet[2386]: I0625 14:15:37.917539 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-79bb454f64-cblmq" podStartSLOduration=2.622885448 podCreationTimestamp="2024-06-25 14:15:34 +0000 UTC" firstStartedPulling="2024-06-25 14:15:35.412253721 +0000 UTC m=+19.693567707" lastFinishedPulling="2024-06-25 14:15:36.70686568 +0000 UTC m=+20.988179786" observedRunningTime="2024-06-25 14:15:36.91080989 +0000 UTC m=+21.192123916" watchObservedRunningTime="2024-06-25 14:15:37.917497527 +0000 UTC m=+22.198811513" Jun 25 14:15:38.139723 systemd[1]: run-containerd-runc-k8s.io-eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f-runc.yLyT8a.mount: Deactivated successfully. Jun 25 14:15:38.139866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb41873ac62a8adaf91f9c23c0b54134a2a303e254c450d154b847b5fa1d163f-rootfs.mount: Deactivated successfully. 
Jun 25 14:15:38.840311 kubelet[2386]: E0625 14:15:38.840264 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8lrw" podUID="ab547801-7d4b-41c4-b3b9-81712e462073" Jun 25 14:15:39.399632 kubelet[2386]: I0625 14:15:39.397075 2386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:15:39.399632 kubelet[2386]: E0625 14:15:39.398268 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:39.422000 audit[3061]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:39.422000 audit[3061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff37cc460 a2=0 a3=1 items=0 ppid=2550 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:39.422000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:39.423000 audit[3061]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:39.423000 audit[3061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff37cc460 a2=0 a3=1 items=0 ppid=2550 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:39.423000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:39.906418 kubelet[2386]: E0625 14:15:39.906384 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:40.341028 containerd[1350]: time="2024-06-25T14:15:40.340978385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:40.341530 containerd[1350]: time="2024-06-25T14:15:40.341506985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:15:40.342674 containerd[1350]: time="2024-06-25T14:15:40.342634904Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:40.344569 containerd[1350]: time="2024-06-25T14:15:40.344537583Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:40.345907 containerd[1350]: time="2024-06-25T14:15:40.345865862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:40.346824 containerd[1350]: time="2024-06-25T14:15:40.346783341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.441977403s" Jun 25 14:15:40.346921 containerd[1350]: 
time="2024-06-25T14:15:40.346823901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:15:40.348683 containerd[1350]: time="2024-06-25T14:15:40.348644620Z" level=info msg="CreateContainer within sandbox \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:15:40.360257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958335925.mount: Deactivated successfully. Jun 25 14:15:40.365042 containerd[1350]: time="2024-06-25T14:15:40.364994528Z" level=info msg="CreateContainer within sandbox \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04cfa9fd033efd607bea9b8f886e8a3356ae4ebcf8aedf4a15733861b68814fc\"" Jun 25 14:15:40.365806 containerd[1350]: time="2024-06-25T14:15:40.365777487Z" level=info msg="StartContainer for \"04cfa9fd033efd607bea9b8f886e8a3356ae4ebcf8aedf4a15733861b68814fc\"" Jun 25 14:15:40.469958 containerd[1350]: time="2024-06-25T14:15:40.469887493Z" level=info msg="StartContainer for \"04cfa9fd033efd607bea9b8f886e8a3356ae4ebcf8aedf4a15733861b68814fc\" returns successfully" Jun 25 14:15:40.840664 kubelet[2386]: E0625 14:15:40.840614 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8lrw" podUID="ab547801-7d4b-41c4-b3b9-81712e462073" Jun 25 14:15:40.909111 kubelet[2386]: E0625 14:15:40.909082 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:40.912542 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-04cfa9fd033efd607bea9b8f886e8a3356ae4ebcf8aedf4a15733861b68814fc-rootfs.mount: Deactivated successfully. Jun 25 14:15:40.916290 containerd[1350]: time="2024-06-25T14:15:40.916237252Z" level=info msg="shim disconnected" id=04cfa9fd033efd607bea9b8f886e8a3356ae4ebcf8aedf4a15733861b68814fc namespace=k8s.io Jun 25 14:15:40.916290 containerd[1350]: time="2024-06-25T14:15:40.916286532Z" level=warning msg="cleaning up after shim disconnected" id=04cfa9fd033efd607bea9b8f886e8a3356ae4ebcf8aedf4a15733861b68814fc namespace=k8s.io Jun 25 14:15:40.916290 containerd[1350]: time="2024-06-25T14:15:40.916294692Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:15:40.950210 kubelet[2386]: I0625 14:15:40.949380 2386 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 14:15:40.971530 kubelet[2386]: I0625 14:15:40.971485 2386 topology_manager.go:215] "Topology Admit Handler" podUID="5ba46353-450d-46a3-a19c-54c7d8f17c69" podNamespace="calico-system" podName="calico-kube-controllers-668cb8f956-lh8rc" Jun 25 14:15:40.975426 kubelet[2386]: I0625 14:15:40.974710 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba46353-450d-46a3-a19c-54c7d8f17c69-tigera-ca-bundle\") pod \"calico-kube-controllers-668cb8f956-lh8rc\" (UID: \"5ba46353-450d-46a3-a19c-54c7d8f17c69\") " pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc" Jun 25 14:15:40.975426 kubelet[2386]: I0625 14:15:40.974753 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w4sg\" (UniqueName: \"kubernetes.io/projected/5ba46353-450d-46a3-a19c-54c7d8f17c69-kube-api-access-6w4sg\") pod \"calico-kube-controllers-668cb8f956-lh8rc\" (UID: \"5ba46353-450d-46a3-a19c-54c7d8f17c69\") " pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc" Jun 25 
14:15:40.976707 kubelet[2386]: I0625 14:15:40.976641 2386 topology_manager.go:215] "Topology Admit Handler" podUID="5e87b194-5eb9-4034-8536-a78f10e6f560" podNamespace="kube-system" podName="coredns-5dd5756b68-5rqtj" Jun 25 14:15:40.977219 kubelet[2386]: I0625 14:15:40.977195 2386 topology_manager.go:215] "Topology Admit Handler" podUID="58967aa2-30a2-441d-bdef-2abe02a8e0ec" podNamespace="kube-system" podName="coredns-5dd5756b68-7dwb5" Jun 25 14:15:41.075886 kubelet[2386]: I0625 14:15:41.075801 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e87b194-5eb9-4034-8536-a78f10e6f560-config-volume\") pod \"coredns-5dd5756b68-5rqtj\" (UID: \"5e87b194-5eb9-4034-8536-a78f10e6f560\") " pod="kube-system/coredns-5dd5756b68-5rqtj" Jun 25 14:15:41.076051 kubelet[2386]: I0625 14:15:41.075928 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58967aa2-30a2-441d-bdef-2abe02a8e0ec-config-volume\") pod \"coredns-5dd5756b68-7dwb5\" (UID: \"58967aa2-30a2-441d-bdef-2abe02a8e0ec\") " pod="kube-system/coredns-5dd5756b68-7dwb5" Jun 25 14:15:41.076051 kubelet[2386]: I0625 14:15:41.075985 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhgj\" (UniqueName: \"kubernetes.io/projected/5e87b194-5eb9-4034-8536-a78f10e6f560-kube-api-access-8qhgj\") pod \"coredns-5dd5756b68-5rqtj\" (UID: \"5e87b194-5eb9-4034-8536-a78f10e6f560\") " pod="kube-system/coredns-5dd5756b68-5rqtj" Jun 25 14:15:41.076051 kubelet[2386]: I0625 14:15:41.076012 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpm8\" (UniqueName: \"kubernetes.io/projected/58967aa2-30a2-441d-bdef-2abe02a8e0ec-kube-api-access-9hpm8\") pod \"coredns-5dd5756b68-7dwb5\" (UID: 
\"58967aa2-30a2-441d-bdef-2abe02a8e0ec\") " pod="kube-system/coredns-5dd5756b68-7dwb5" Jun 25 14:15:41.275377 containerd[1350]: time="2024-06-25T14:15:41.275317566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668cb8f956-lh8rc,Uid:5ba46353-450d-46a3-a19c-54c7d8f17c69,Namespace:calico-system,Attempt:0,}" Jun 25 14:15:41.280202 kubelet[2386]: E0625 14:15:41.280168 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:41.280686 containerd[1350]: time="2024-06-25T14:15:41.280629802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5rqtj,Uid:5e87b194-5eb9-4034-8536-a78f10e6f560,Namespace:kube-system,Attempt:0,}" Jun 25 14:15:41.297214 kubelet[2386]: E0625 14:15:41.297175 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:41.298255 containerd[1350]: time="2024-06-25T14:15:41.297860431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7dwb5,Uid:58967aa2-30a2-441d-bdef-2abe02a8e0ec,Namespace:kube-system,Attempt:0,}" Jun 25 14:15:41.742172 containerd[1350]: time="2024-06-25T14:15:41.742038852Z" level=error msg="Failed to destroy network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:15:41.745938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b-shm.mount: Deactivated successfully. 
Jun 25 14:15:41.746287 containerd[1350]: time="2024-06-25T14:15:41.746035489Z" level=error msg="Failed to destroy network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.746447 containerd[1350]: time="2024-06-25T14:15:41.746384769Z" level=error msg="Failed to destroy network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.746884 containerd[1350]: time="2024-06-25T14:15:41.746848688Z" level=error msg="encountered an error cleaning up failed sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.748037 containerd[1350]: time="2024-06-25T14:15:41.747982448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668cb8f956-lh8rc,Uid:5ba46353-450d-46a3-a19c-54c7d8f17c69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.748441 containerd[1350]: time="2024-06-25T14:15:41.748397487Z" level=error msg="encountered an error cleaning up failed sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.748455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d-shm.mount: Deactivated successfully.
Jun 25 14:15:41.748585 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168-shm.mount: Deactivated successfully.
Jun 25 14:15:41.748729 containerd[1350]: time="2024-06-25T14:15:41.748414567Z" level=error msg="encountered an error cleaning up failed sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.748782 containerd[1350]: time="2024-06-25T14:15:41.748750407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5rqtj,Uid:5e87b194-5eb9-4034-8536-a78f10e6f560,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.748842 containerd[1350]: time="2024-06-25T14:15:41.748695407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7dwb5,Uid:58967aa2-30a2-441d-bdef-2abe02a8e0ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.749302 kubelet[2386]: E0625 14:15:41.749277 2386 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.749436 kubelet[2386]: E0625 14:15:41.749337 2386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-5rqtj"
Jun 25 14:15:41.749436 kubelet[2386]: E0625 14:15:41.749358 2386 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-5rqtj"
Jun 25 14:15:41.749436 kubelet[2386]: E0625 14:15:41.749422 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-5rqtj_kube-system(5e87b194-5eb9-4034-8536-a78f10e6f560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-5rqtj_kube-system(5e87b194-5eb9-4034-8536-a78f10e6f560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-5rqtj" podUID="5e87b194-5eb9-4034-8536-a78f10e6f560"
Jun 25 14:15:41.750152 kubelet[2386]: E0625 14:15:41.750098 2386 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.750152 kubelet[2386]: E0625 14:15:41.750151 2386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7dwb5"
Jun 25 14:15:41.750290 kubelet[2386]: E0625 14:15:41.750172 2386 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7dwb5"
Jun 25 14:15:41.750290 kubelet[2386]: E0625 14:15:41.750234 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-7dwb5_kube-system(58967aa2-30a2-441d-bdef-2abe02a8e0ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-7dwb5_kube-system(58967aa2-30a2-441d-bdef-2abe02a8e0ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7dwb5" podUID="58967aa2-30a2-441d-bdef-2abe02a8e0ec"
Jun 25 14:15:41.751318 kubelet[2386]: E0625 14:15:41.751247 2386 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.751318 kubelet[2386]: E0625 14:15:41.751292 2386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc"
Jun 25 14:15:41.751318 kubelet[2386]: E0625 14:15:41.751310 2386 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc"
Jun 25 14:15:41.751490 kubelet[2386]: E0625 14:15:41.751361 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-668cb8f956-lh8rc_calico-system(5ba46353-450d-46a3-a19c-54c7d8f17c69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-668cb8f956-lh8rc_calico-system(5ba46353-450d-46a3-a19c-54c7d8f17c69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc" podUID="5ba46353-450d-46a3-a19c-54c7d8f17c69"
Jun 25 14:15:41.911942 kubelet[2386]: I0625 14:15:41.911885 2386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d"
Jun 25 14:15:41.912546 containerd[1350]: time="2024-06-25T14:15:41.912501497Z" level=info msg="StopPodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\""
Jun 25 14:15:41.914023 containerd[1350]: time="2024-06-25T14:15:41.912692777Z" level=info msg="Ensure that sandbox 61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d in task-service has been cleanup successfully"
Jun 25 14:15:41.915630 kubelet[2386]: I0625 14:15:41.915608 2386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b"
Jun 25 14:15:41.917461 containerd[1350]: time="2024-06-25T14:15:41.916106014Z" level=info msg="StopPodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\""
Jun 25 14:15:41.917461 containerd[1350]: time="2024-06-25T14:15:41.916381374Z" level=info msg="Ensure that sandbox 254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b in task-service has been cleanup successfully"
Jun 25 14:15:41.919535 kubelet[2386]: E0625 14:15:41.919509 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 14:15:41.929918 containerd[1350]: time="2024-06-25T14:15:41.923726849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jun 25 14:15:41.929918 containerd[1350]: time="2024-06-25T14:15:41.926074168Z" level=info msg="StopPodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\""
Jun 25 14:15:41.929918 containerd[1350]: time="2024-06-25T14:15:41.926282287Z" level=info msg="Ensure that sandbox 877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168 in task-service has been cleanup successfully"
Jun 25 14:15:41.930099 kubelet[2386]: I0625 14:15:41.924771 2386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168"
Jun 25 14:15:41.949612 containerd[1350]: time="2024-06-25T14:15:41.949539592Z" level=error msg="StopPodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" failed" error="failed to destroy network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.950229 containerd[1350]: time="2024-06-25T14:15:41.950166911Z" level=error msg="StopPodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" failed" error="failed to destroy network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.952630 kubelet[2386]: E0625 14:15:41.952588 2386 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d"
Jun 25 14:15:41.952719 kubelet[2386]: E0625 14:15:41.952689 2386 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d"}
Jun 25 14:15:41.952772 kubelet[2386]: E0625 14:15:41.952724 2386 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e87b194-5eb9-4034-8536-a78f10e6f560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 14:15:41.952772 kubelet[2386]: E0625 14:15:41.952755 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e87b194-5eb9-4034-8536-a78f10e6f560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-5rqtj" podUID="5e87b194-5eb9-4034-8536-a78f10e6f560"
Jun 25 14:15:41.953064 kubelet[2386]: E0625 14:15:41.953042 2386 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b"
Jun 25 14:15:41.953114 kubelet[2386]: E0625 14:15:41.953085 2386 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b"}
Jun 25 14:15:41.953142 kubelet[2386]: E0625 14:15:41.953119 2386 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58967aa2-30a2-441d-bdef-2abe02a8e0ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 14:15:41.953208 kubelet[2386]: E0625 14:15:41.953144 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58967aa2-30a2-441d-bdef-2abe02a8e0ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7dwb5" podUID="58967aa2-30a2-441d-bdef-2abe02a8e0ec"
Jun 25 14:15:41.968920 containerd[1350]: time="2024-06-25T14:15:41.968852499Z" level=error msg="StopPodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" failed" error="failed to destroy network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:41.969144 kubelet[2386]: E0625 14:15:41.969112 2386 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168"
Jun 25 14:15:41.969199 kubelet[2386]: E0625 14:15:41.969156 2386 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168"}
Jun 25 14:15:41.969240 kubelet[2386]: E0625 14:15:41.969211 2386 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ba46353-450d-46a3-a19c-54c7d8f17c69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 14:15:41.969292 kubelet[2386]: E0625 14:15:41.969281 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ba46353-450d-46a3-a19c-54c7d8f17c69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc" podUID="5ba46353-450d-46a3-a19c-54c7d8f17c69"
Jun 25 14:15:42.847681 containerd[1350]: time="2024-06-25T14:15:42.847637262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8lrw,Uid:ab547801-7d4b-41c4-b3b9-81712e462073,Namespace:calico-system,Attempt:0,}"
Jun 25 14:15:42.936698 containerd[1350]: time="2024-06-25T14:15:42.936614726Z" level=error msg="Failed to destroy network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:42.940082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282-shm.mount: Deactivated successfully.
Jun 25 14:15:42.942069 containerd[1350]: time="2024-06-25T14:15:42.938597365Z" level=error msg="encountered an error cleaning up failed sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:42.942145 containerd[1350]: time="2024-06-25T14:15:42.942107483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8lrw,Uid:ab547801-7d4b-41c4-b3b9-81712e462073,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:42.942422 kubelet[2386]: E0625 14:15:42.942366 2386 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:42.942648 kubelet[2386]: E0625 14:15:42.942440 2386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8lrw"
Jun 25 14:15:42.942648 kubelet[2386]: E0625 14:15:42.942463 2386 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8lrw"
Jun 25 14:15:42.942648 kubelet[2386]: E0625 14:15:42.942514 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v8lrw_calico-system(ab547801-7d4b-41c4-b3b9-81712e462073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v8lrw_calico-system(ab547801-7d4b-41c4-b3b9-81712e462073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8lrw" podUID="ab547801-7d4b-41c4-b3b9-81712e462073"
Jun 25 14:15:43.931170 kubelet[2386]: I0625 14:15:43.931137 2386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282"
Jun 25 14:15:43.938126 containerd[1350]: time="2024-06-25T14:15:43.938078051Z" level=info msg="StopPodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\""
Jun 25 14:15:43.938481 containerd[1350]: time="2024-06-25T14:15:43.938284970Z" level=info msg="Ensure that sandbox 968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282 in task-service has been cleanup successfully"
Jun 25 14:15:43.971702 containerd[1350]: time="2024-06-25T14:15:43.971643671Z" level=error msg="StopPodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" failed" error="failed to destroy network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jun 25 14:15:43.971956 kubelet[2386]: E0625 14:15:43.971933 2386 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282"
Jun 25 14:15:43.972220 kubelet[2386]: E0625 14:15:43.971990 2386 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282"}
Jun 25 14:15:43.972220 kubelet[2386]: E0625 14:15:43.972026 2386 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab547801-7d4b-41c4-b3b9-81712e462073\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jun 25 14:15:43.972220 kubelet[2386]: E0625 14:15:43.972053 2386 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab547801-7d4b-41c4-b3b9-81712e462073\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8lrw" podUID="ab547801-7d4b-41c4-b3b9-81712e462073"
Jun 25 14:15:44.958677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393231716.mount: Deactivated successfully.
Jun 25 14:15:45.246832 containerd[1350]: time="2024-06-25T14:15:45.246778810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:15:45.247487 containerd[1350]: time="2024-06-25T14:15:45.247458650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350"
Jun 25 14:15:45.248180 containerd[1350]: time="2024-06-25T14:15:45.248146409Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:15:45.249859 containerd[1350]: time="2024-06-25T14:15:45.249823649Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:15:45.251036 containerd[1350]: time="2024-06-25T14:15:45.251000328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:15:45.251971 containerd[1350]: time="2024-06-25T14:15:45.251936087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.328158998s"
Jun 25 14:15:45.252033 containerd[1350]: time="2024-06-25T14:15:45.251972807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\""
Jun 25 14:15:45.261646 containerd[1350]: time="2024-06-25T14:15:45.260165763Z" level=info msg="CreateContainer within sandbox \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jun 25 14:15:45.271793 containerd[1350]: time="2024-06-25T14:15:45.271744997Z" level=info msg="CreateContainer within sandbox \"aefa507dbe99c027c43cd5d101e11760152dfd20705d09b33cfca3bd061a9964\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bc79e2addcfb6e61f3bddd13c43bf6670bf1c7d879f25d4fe148c826662eb99\""
Jun 25 14:15:45.272239 containerd[1350]: time="2024-06-25T14:15:45.272192037Z" level=info msg="StartContainer for \"7bc79e2addcfb6e61f3bddd13c43bf6670bf1c7d879f25d4fe148c826662eb99\""
Jun 25 14:15:45.357555 containerd[1350]: time="2024-06-25T14:15:45.357503753Z" level=info msg="StartContainer for \"7bc79e2addcfb6e61f3bddd13c43bf6670bf1c7d879f25d4fe148c826662eb99\" returns successfully"
Jun 25 14:15:45.531105 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jun 25 14:15:45.531240 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Jun 25 14:15:45.941019 kubelet[2386]: E0625 14:15:45.940988 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 14:15:45.955551 kubelet[2386]: I0625 14:15:45.955513 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pqgng" podStartSLOduration=1.147324083 podCreationTimestamp="2024-06-25 14:15:35 +0000 UTC" firstStartedPulling="2024-06-25 14:15:35.444053289 +0000 UTC m=+19.725367315" lastFinishedPulling="2024-06-25 14:15:45.252199327 +0000 UTC m=+29.533513353" observedRunningTime="2024-06-25 14:15:45.955221201 +0000 UTC m=+30.236535267" watchObservedRunningTime="2024-06-25 14:15:45.955470121 +0000 UTC m=+30.236784147"
Jun 25 14:15:46.809000 audit[3500]: AVC avc: denied { write } for pid=3500 comm="tee" name="fd" dev="proc" ino=19043 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 14:15:46.812434 kernel: kauditd_printk_skb: 14 callbacks suppressed
Jun 25 14:15:46.812529 kernel: audit: type=1400 audit(1719324946.809:287): avc: denied { write } for pid=3500 comm="tee" name="fd" dev="proc" ino=19043 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 14:15:46.812558 kernel: audit: type=1300 audit(1719324946.809:287): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe7ec6a17 a2=241 a3=1b6 items=1 ppid=3468 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:15:46.809000 audit[3500]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe7ec6a17 a2=241 a3=1b6 items=1 ppid=3468 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:15:46.809000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Jun 25 14:15:46.816019 kernel: audit: type=1307 audit(1719324946.809:287): cwd="/etc/service/enabled/node-status-reporter/log"
Jun 25 14:15:46.816076 kernel: audit: type=1302 audit(1719324946.809:287): item=0 name="/dev/fd/63" inode=19666 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 14:15:46.809000 audit: PATH item=0 name="/dev/fd/63" inode=19666 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 14:15:46.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 14:15:46.819952 kernel: audit: type=1327 audit(1719324946.809:287): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 14:15:46.820035 kernel: audit: type=1400 audit(1719324946.814:288): avc: denied { write } for pid=3515 comm="tee" name="fd" dev="proc" ino=19678 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 14:15:46.814000 audit[3515]: AVC avc: denied { write } for pid=3515 comm="tee" name="fd" dev="proc" ino=19678 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 14:15:46.814000 audit[3515]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffed127a26 a2=241 a3=1b6 items=1 ppid=3473 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:15:46.828429 kernel: audit: type=1300 audit(1719324946.814:288): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffed127a26 a2=241 a3=1b6 items=1 ppid=3473 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:15:46.814000 audit: CWD cwd="/etc/service/enabled/confd/log"
Jun 25 14:15:46.830195 kernel: audit: type=1307 audit(1719324946.814:288): cwd="/etc/service/enabled/confd/log"
Jun 25 14:15:46.830268 kernel: audit: type=1302 audit(1719324946.814:288): item=0 name="/dev/fd/63" inode=19669 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 14:15:46.814000 audit: PATH item=0 name="/dev/fd/63" inode=19669 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jun 25 14:15:46.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 14:15:46.836688 kernel: audit: type=1327 audit(1719324946.814:288): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jun 25 14:15:46.818000 audit[3520]: AVC avc: denied { write } for pid=3520 comm="tee" name="fd" dev="proc" ino=19049 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jun 25 14:15:46.818000 audit[3520]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd47e2a26 a2=241 a3=1b6 items=1 ppid=3460 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295
comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:46.818000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:15:46.818000 audit: PATH item=0 name="/dev/fd/63" inode=19672 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:15:46.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:15:46.818000 audit[3526]: AVC avc: denied { write } for pid=3526 comm="tee" name="fd" dev="proc" ino=19684 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:15:46.818000 audit[3526]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc08d0a27 a2=241 a3=1b6 items=1 ppid=3465 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:46.818000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:15:46.818000 audit: PATH item=0 name="/dev/fd/63" inode=19673 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:15:46.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:15:46.828000 audit[3510]: AVC avc: denied { write } for pid=3510 comm="tee" name="fd" dev="proc" ino=19690 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:15:46.828000 audit[3510]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd6577a16 a2=241 a3=1b6 items=1 ppid=3472 pid=3510 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:46.828000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:15:46.828000 audit: PATH item=0 name="/dev/fd/63" inode=19036 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:15:46.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:15:46.836000 audit[3532]: AVC avc: denied { write } for pid=3532 comm="tee" name="fd" dev="proc" ino=19054 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:15:46.836000 audit[3532]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd05d8a26 a2=241 a3=1b6 items=1 ppid=3464 pid=3532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:46.836000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:15:46.836000 audit: PATH item=0 name="/dev/fd/63" inode=18243 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:15:46.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:15:46.851000 audit[3539]: AVC avc: denied { write } for pid=3539 comm="tee" name="fd" dev="proc" ino=19694 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:15:46.851000 audit[3539]: SYSCALL 
arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffc37ca28 a2=241 a3=1b6 items=1 ppid=3461 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:46.851000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:15:46.851000 audit: PATH item=0 name="/dev/fd/63" inode=19051 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:15:46.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:15:46.948068 kubelet[2386]: I0625 14:15:46.948039 2386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:15:46.948907 kubelet[2386]: E0625 14:15:46.948863 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:47.118811 systemd-networkd[1138]: vxlan.calico: Link UP Jun 25 14:15:47.118819 systemd-networkd[1138]: vxlan.calico: Gained carrier Jun 25 14:15:47.140000 audit: BPF prog-id=10 op=LOAD Jun 25 14:15:47.140000 audit[3608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfc9a6e8 a2=70 a3=ffffcfc9a758 items=0 ppid=3462 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:15:47.140000 audit: BPF 
prog-id=10 op=UNLOAD Jun 25 14:15:47.140000 audit: BPF prog-id=11 op=LOAD Jun 25 14:15:47.140000 audit[3608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfc9a6e8 a2=70 a3=4b243c items=0 ppid=3462 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:15:47.140000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:15:47.140000 audit: BPF prog-id=12 op=LOAD Jun 25 14:15:47.140000 audit[3608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcfc9a688 a2=70 a3=ffffcfc9a6f8 items=0 ppid=3462 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:15:47.140000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:15:47.141000 audit: BPF prog-id=13 op=LOAD Jun 25 14:15:47.141000 audit[3608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcfc9a6b8 a2=70 a3=9f7b4a9 items=0 ppid=3462 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.141000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:15:47.159000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:15:47.212000 audit[3639]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:47.212000 audit[3639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe5d1ebc0 a2=0 a3=ffff8176cfa8 items=0 ppid=3462 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.212000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:47.220000 audit[3640]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:47.220000 audit[3640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=fffffe0438d0 a2=0 a3=ffff904cdfa8 items=0 ppid=3462 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.220000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:47.226000 audit[3641]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:47.226000 audit[3641]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 
a0=3 a1=ffffe25b9600 a2=0 a3=ffffb9352fa8 items=0 ppid=3462 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.226000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:47.227000 audit[3643]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:47.227000 audit[3643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffeb306d50 a2=0 a3=ffff9ed5cfa8 items=0 ppid=3462 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:47.227000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:48.164048 systemd-networkd[1138]: vxlan.calico: Gained IPv6LL Jun 25 14:15:51.973549 kubelet[2386]: I0625 14:15:51.973505 2386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:15:51.975752 kubelet[2386]: E0625 14:15:51.974410 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:52.143191 systemd[1]: run-containerd-runc-k8s.io-7bc79e2addcfb6e61f3bddd13c43bf6670bf1c7d879f25d4fe148c826662eb99-runc.f9KALN.mount: Deactivated successfully. 
Jun 25 14:15:52.962527 kubelet[2386]: E0625 14:15:52.962500 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:53.840471 containerd[1350]: time="2024-06-25T14:15:53.840422799Z" level=info msg="StopPodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\"" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:53.947 [INFO][3724] k8s.go 608: Cleaning up netns ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:53.947 [INFO][3724] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" iface="eth0" netns="/var/run/netns/cni-3b9e5a8f-0bed-7c21-7506-d26048db3f74" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:53.948 [INFO][3724] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" iface="eth0" netns="/var/run/netns/cni-3b9e5a8f-0bed-7c21-7506-d26048db3f74" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:53.948 [INFO][3724] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" iface="eth0" netns="/var/run/netns/cni-3b9e5a8f-0bed-7c21-7506-d26048db3f74" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:53.948 [INFO][3724] k8s.go 615: Releasing IP address(es) ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:53.948 [INFO][3724] utils.go 188: Calico CNI releasing IP address ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.035 [INFO][3736] ipam_plugin.go 411: Releasing address using handleID ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.035 [INFO][3736] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.035 [INFO][3736] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.053 [WARNING][3736] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.054 [INFO][3736] ipam_plugin.go 439: Releasing address using workloadID ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.056 [INFO][3736] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:15:54.061168 containerd[1350]: 2024-06-25 14:15:54.059 [INFO][3724] k8s.go 621: Teardown processing complete. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:15:54.063800 containerd[1350]: time="2024-06-25T14:15:54.063483731Z" level=info msg="TearDown network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" successfully" Jun 25 14:15:54.063800 containerd[1350]: time="2024-06-25T14:15:54.063520931Z" level=info msg="StopPodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" returns successfully" Jun 25 14:15:54.063286 systemd[1]: run-netns-cni\x2d3b9e5a8f\x2d0bed\x2d7c21\x2d7506\x2dd26048db3f74.mount: Deactivated successfully. 
Jun 25 14:15:54.064428 containerd[1350]: time="2024-06-25T14:15:54.064395771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668cb8f956-lh8rc,Uid:5ba46353-450d-46a3-a19c-54c7d8f17c69,Namespace:calico-system,Attempt:1,}" Jun 25 14:15:54.209366 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:15:54.209464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3dd6b29008c: link becomes ready Jun 25 14:15:54.206545 systemd-networkd[1138]: cali3dd6b29008c: Link UP Jun 25 14:15:54.208238 systemd-networkd[1138]: cali3dd6b29008c: Gained carrier Jun 25 14:15:54.230675 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:44506.service - OpenSSH per-connection server daemon (10.0.0.1:44506). Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.133 [INFO][3744] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0 calico-kube-controllers-668cb8f956- calico-system 5ba46353-450d-46a3-a19c-54c7d8f17c69 726 0 2024-06-25 14:15:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:668cb8f956 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-668cb8f956-lh8rc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3dd6b29008c [] []}} ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.134 [INFO][3744] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" 
Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.160 [INFO][3758] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" HandleID="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.173 [INFO][3758] ipam_plugin.go 264: Auto assigning IP ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" HandleID="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-668cb8f956-lh8rc", "timestamp":"2024-06-25 14:15:54.160673543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.173 [INFO][3758] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.173 [INFO][3758] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.173 [INFO][3758] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.175 [INFO][3758] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.182 [INFO][3758] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.186 [INFO][3758] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.188 [INFO][3758] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.190 [INFO][3758] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.190 [INFO][3758] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.192 [INFO][3758] ipam.go 1685: Creating new handle: k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2 Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.195 [INFO][3758] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.200 [INFO][3758] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" 
host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.200 [INFO][3758] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" host="localhost" Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.200 [INFO][3758] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:15:54.231218 containerd[1350]: 2024-06-25 14:15:54.200 [INFO][3758] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" HandleID="k8s-pod-network.3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.23:22-10.0.0.1:44506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:54.231801 containerd[1350]: 2024-06-25 14:15:54.202 [INFO][3744] k8s.go 386: Populated endpoint ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0", GenerateName:"calico-kube-controllers-668cb8f956-", Namespace:"calico-system", SelfLink:"", UID:"5ba46353-450d-46a3-a19c-54c7d8f17c69", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668cb8f956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-668cb8f956-lh8rc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dd6b29008c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:15:54.231801 containerd[1350]: 2024-06-25 14:15:54.202 [INFO][3744] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] 
ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.231801 containerd[1350]: 2024-06-25 14:15:54.202 [INFO][3744] dataplane_linux.go 68: Setting the host side veth name to cali3dd6b29008c ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.231801 containerd[1350]: 2024-06-25 14:15:54.208 [INFO][3744] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.231801 containerd[1350]: 2024-06-25 14:15:54.209 [INFO][3744] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0", GenerateName:"calico-kube-controllers-668cb8f956-", Namespace:"calico-system", SelfLink:"", UID:"5ba46353-450d-46a3-a19c-54c7d8f17c69", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"668cb8f956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2", Pod:"calico-kube-controllers-668cb8f956-lh8rc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dd6b29008c", MAC:"d6:a2:b3:ae:56:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:15:54.231801 containerd[1350]: 2024-06-25 14:15:54.229 [INFO][3744] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2" Namespace="calico-system" Pod="calico-kube-controllers-668cb8f956-lh8rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:15:54.233231 kernel: kauditd_printk_skb: 53 callbacks suppressed Jun 25 14:15:54.233275 kernel: audit: type=1130 audit(1719324954.229:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.23:22-10.0.0.1:44506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:54.241000 audit[3777]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:54.241000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffc8f9e300 a2=0 a3=ffffa0383fa8 items=0 ppid=3462 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:54.247471 kernel: audit: type=1325 audit(1719324954.241:307): table=filter:101 family=2 entries=34 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:54.247598 kernel: audit: type=1300 audit(1719324954.241:307): arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffc8f9e300 a2=0 a3=ffffa0383fa8 items=0 ppid=3462 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:54.247619 kernel: audit: type=1327 audit(1719324954.241:307): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:54.241000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:54.256914 containerd[1350]: time="2024-06-25T14:15:54.256819155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:54.256914 containerd[1350]: time="2024-06-25T14:15:54.256868995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:54.256914 containerd[1350]: time="2024-06-25T14:15:54.256904555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:54.257060 containerd[1350]: time="2024-06-25T14:15:54.256915315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:54.278000 audit[3768]: USER_ACCT pid=3768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.279748 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 44506 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:15:54.279562 systemd-resolved[1265]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:15:54.281245 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:15:54.279000 audit[3768]: CRED_ACQ pid=3768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.284280 kernel: audit: type=1101 audit(1719324954.278:308): pid=3768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.284342 kernel: audit: type=1103 audit(1719324954.279:309): pid=3768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.284366 kernel: audit: type=1006 audit(1719324954.280:310): pid=3768 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 14:15:54.288964 kernel: audit: type=1300 audit(1719324954.280:310): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff318f50 a2=3 a3=1 items=0 ppid=1 pid=3768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:54.289032 kernel: audit: type=1327 audit(1719324954.280:310): proctitle=737368643A20636F7265205B707269765D Jun 25 14:15:54.280000 audit[3768]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff318f50 a2=3 a3=1 items=0 ppid=1 pid=3768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:54.280000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:15:54.290944 systemd-logind[1335]: New session 8 of user core. Jun 25 14:15:54.295185 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 14:15:54.298000 audit[3768]: USER_START pid=3768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.302958 kernel: audit: type=1105 audit(1719324954.298:311): pid=3768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.302000 audit[3821]: CRED_ACQ pid=3821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.327506 containerd[1350]: time="2024-06-25T14:15:54.327463854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668cb8f956-lh8rc,Uid:5ba46353-450d-46a3-a19c-54c7d8f17c69,Namespace:calico-system,Attempt:1,} returns sandbox id \"3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2\"" Jun 25 14:15:54.329265 containerd[1350]: time="2024-06-25T14:15:54.329234733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:15:54.490603 sshd[3768]: pam_unix(sshd:session): session closed for user core Jun 25 14:15:54.490000 audit[3768]: USER_END pid=3768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.491000 audit[3768]: CRED_DISP pid=3768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:54.493614 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:44506.service: Deactivated successfully. Jun 25 14:15:54.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.23:22-10.0.0.1:44506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.494647 systemd-logind[1335]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:15:54.494686 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:15:54.497169 systemd-logind[1335]: Removed session 8. Jun 25 14:15:55.609186 containerd[1350]: time="2024-06-25T14:15:55.609125572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:55.609760 containerd[1350]: time="2024-06-25T14:15:55.609713412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:15:55.610583 containerd[1350]: time="2024-06-25T14:15:55.610554491Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:55.611996 containerd[1350]: time="2024-06-25T14:15:55.611958811Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:55.613346 containerd[1350]: time="2024-06-25T14:15:55.613304931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:15:55.614076 containerd[1350]: 
time="2024-06-25T14:15:55.614045931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.284620358s" Jun 25 14:15:55.614119 containerd[1350]: time="2024-06-25T14:15:55.614082851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:15:55.625994 containerd[1350]: time="2024-06-25T14:15:55.625945207Z" level=info msg="CreateContainer within sandbox \"3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:15:55.642874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277974741.mount: Deactivated successfully. 
Jun 25 14:15:55.644076 containerd[1350]: time="2024-06-25T14:15:55.642863363Z" level=info msg="CreateContainer within sandbox \"3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9a86ab285269a4dec9eedcf131a1fcf848a915edbb42b984b11998874d720c76\"" Jun 25 14:15:55.644076 containerd[1350]: time="2024-06-25T14:15:55.643682562Z" level=info msg="StartContainer for \"9a86ab285269a4dec9eedcf131a1fcf848a915edbb42b984b11998874d720c76\"" Jun 25 14:15:55.695447 containerd[1350]: time="2024-06-25T14:15:55.695390388Z" level=info msg="StartContainer for \"9a86ab285269a4dec9eedcf131a1fcf848a915edbb42b984b11998874d720c76\" returns successfully" Jun 25 14:15:55.841110 containerd[1350]: time="2024-06-25T14:15:55.841063189Z" level=info msg="StopPodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\"" Jun 25 14:15:55.841428 containerd[1350]: time="2024-06-25T14:15:55.841265788Z" level=info msg="StopPodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\"" Jun 25 14:15:55.996785 kubelet[2386]: I0625 14:15:55.987374 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-668cb8f956-lh8rc" podStartSLOduration=19.701844393000002 podCreationTimestamp="2024-06-25 14:15:35 +0000 UTC" firstStartedPulling="2024-06-25 14:15:54.328855814 +0000 UTC m=+38.610169840" lastFinishedPulling="2024-06-25 14:15:55.61433417 +0000 UTC m=+39.895648196" observedRunningTime="2024-06-25 14:15:55.98055255 +0000 UTC m=+40.261866576" watchObservedRunningTime="2024-06-25 14:15:55.987322749 +0000 UTC m=+40.268636775" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.936 [INFO][3921] k8s.go 608: Cleaning up netns ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.936 [INFO][3921] dataplane_linux.go 530: 
Deleting workload's device in netns. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" iface="eth0" netns="/var/run/netns/cni-a523753e-923e-4978-0199-a06edf14fa03" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.936 [INFO][3921] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" iface="eth0" netns="/var/run/netns/cni-a523753e-923e-4978-0199-a06edf14fa03" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.937 [INFO][3921] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" iface="eth0" netns="/var/run/netns/cni-a523753e-923e-4978-0199-a06edf14fa03" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.937 [INFO][3921] k8s.go 615: Releasing IP address(es) ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.937 [INFO][3921] utils.go 188: Calico CNI releasing IP address ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.965 [INFO][3932] ipam_plugin.go 411: Releasing address using handleID ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.969 [INFO][3932] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.969 [INFO][3932] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.989 [WARNING][3932] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:55.989 [INFO][3932] ipam_plugin.go 439: Releasing address using workloadID ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:56.000 [INFO][3932] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:15:56.008737 containerd[1350]: 2024-06-25 14:15:56.004 [INFO][3921] k8s.go 621: Teardown processing complete. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:15:56.009494 containerd[1350]: time="2024-06-25T14:15:56.009456743Z" level=info msg="TearDown network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" successfully" Jun 25 14:15:56.009577 containerd[1350]: time="2024-06-25T14:15:56.009559543Z" level=info msg="StopPodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" returns successfully" Jun 25 14:15:56.009999 kubelet[2386]: E0625 14:15:56.009977 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:56.010436 containerd[1350]: time="2024-06-25T14:15:56.010409822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7dwb5,Uid:58967aa2-30a2-441d-bdef-2abe02a8e0ec,Namespace:kube-system,Attempt:1,}" Jun 25 14:15:56.066922 systemd[1]: run-netns-cni\x2da523753e\x2d923e\x2d4978\x2d0199\x2da06edf14fa03.mount: Deactivated successfully. 
Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.936 [INFO][3916] k8s.go 608: Cleaning up netns ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.936 [INFO][3916] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" iface="eth0" netns="/var/run/netns/cni-73709a54-c864-9747-9ac3-0a083d025051" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.936 [INFO][3916] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" iface="eth0" netns="/var/run/netns/cni-73709a54-c864-9747-9ac3-0a083d025051" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.937 [INFO][3916] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" iface="eth0" netns="/var/run/netns/cni-73709a54-c864-9747-9ac3-0a083d025051" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.937 [INFO][3916] k8s.go 615: Releasing IP address(es) ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.937 [INFO][3916] utils.go 188: Calico CNI releasing IP address ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.968 [INFO][3931] ipam_plugin.go 411: Releasing address using handleID ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:55.969 [INFO][3931] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:56.004 [INFO][3931] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:56.052 [WARNING][3931] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:56.052 [INFO][3931] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:56.055 [INFO][3931] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:15:56.071411 containerd[1350]: 2024-06-25 14:15:56.064 [INFO][3916] k8s.go 621: Teardown processing complete. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:15:56.087948 containerd[1350]: time="2024-06-25T14:15:56.075522526Z" level=info msg="TearDown network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" successfully" Jun 25 14:15:56.087948 containerd[1350]: time="2024-06-25T14:15:56.075567086Z" level=info msg="StopPodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" returns successfully" Jun 25 14:15:56.087948 containerd[1350]: time="2024-06-25T14:15:56.077333685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5rqtj,Uid:5e87b194-5eb9-4034-8536-a78f10e6f560,Namespace:kube-system,Attempt:1,}" Jun 25 14:15:56.077444 systemd[1]: run-netns-cni\x2d73709a54\x2dc864\x2d9747\x2d9ac3\x2d0a083d025051.mount: Deactivated successfully. 
Jun 25 14:15:56.088250 kubelet[2386]: E0625 14:15:56.075908 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:56.229592 systemd-networkd[1138]: cali3dd6b29008c: Gained IPv6LL Jun 25 14:15:56.253875 systemd-networkd[1138]: cali956e5c18019: Link UP Jun 25 14:15:56.255268 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:15:56.255371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali956e5c18019: link becomes ready Jun 25 14:15:56.255491 systemd-networkd[1138]: cali956e5c18019: Gained carrier Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.151 [INFO][3978] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--5rqtj-eth0 coredns-5dd5756b68- kube-system 5e87b194-5eb9-4034-8536-a78f10e6f560 750 0 2024-06-25 14:15:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-5rqtj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali956e5c18019 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.151 [INFO][3978] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.193 [INFO][3993] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" HandleID="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.211 [INFO][3993] ipam_plugin.go 264: Auto assigning IP ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" HandleID="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058d4c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-5rqtj", "timestamp":"2024-06-25 14:15:56.193493416 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.211 [INFO][3993] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.211 [INFO][3993] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.211 [INFO][3993] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.217 [INFO][3993] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.223 [INFO][3993] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.227 [INFO][3993] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.230 [INFO][3993] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.232 [INFO][3993] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.232 [INFO][3993] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.234 [INFO][3993] ipam.go 1685: Creating new handle: k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191 Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.241 [INFO][3993] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.246 [INFO][3993] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" 
host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.246 [INFO][3993] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" host="localhost" Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.246 [INFO][3993] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:15:56.267816 containerd[1350]: 2024-06-25 14:15:56.246 [INFO][3993] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" HandleID="k8s-pod-network.d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.268605 containerd[1350]: 2024-06-25 14:15:56.249 [INFO][3978] k8s.go 386: Populated endpoint ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5rqtj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5e87b194-5eb9-4034-8536-a78f10e6f560", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-5rqtj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali956e5c18019", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:15:56.268605 containerd[1350]: 2024-06-25 14:15:56.249 [INFO][3978] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.268605 containerd[1350]: 2024-06-25 14:15:56.249 [INFO][3978] dataplane_linux.go 68: Setting the host side veth name to cali956e5c18019 ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.268605 containerd[1350]: 2024-06-25 14:15:56.256 [INFO][3978] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.268605 containerd[1350]: 2024-06-25 14:15:56.256 [INFO][3978] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5rqtj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5e87b194-5eb9-4034-8536-a78f10e6f560", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191", Pod:"coredns-5dd5756b68-5rqtj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali956e5c18019", MAC:"ca:dc:df:45:41:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:15:56.268605 containerd[1350]: 2024-06-25 14:15:56.266 [INFO][3978] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191" Namespace="kube-system" Pod="coredns-5dd5756b68-5rqtj" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:15:56.285000 audit[4029]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:56.285000 audit[4029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=ffffd24e0940 a2=0 a3=ffffad761fa8 items=0 ppid=3462 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:56.285000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:56.294049 systemd-networkd[1138]: cali46f4affadcf: Link UP Jun 25 14:15:56.295000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali46f4affadcf: link becomes ready Jun 25 14:15:56.294870 systemd-networkd[1138]: cali46f4affadcf: Gained carrier Jun 25 14:15:56.295876 containerd[1350]: time="2024-06-25T14:15:56.295331990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:56.295876 containerd[1350]: time="2024-06-25T14:15:56.295379790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:56.295876 containerd[1350]: time="2024-06-25T14:15:56.295395590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:56.295876 containerd[1350]: time="2024-06-25T14:15:56.295405510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.149 [INFO][3967] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--7dwb5-eth0 coredns-5dd5756b68- kube-system 58967aa2-30a2-441d-bdef-2abe02a8e0ec 751 0 2024-06-25 14:15:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-7dwb5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46f4affadcf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.149 [INFO][3967] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.207 [INFO][4000] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" HandleID="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.217 [INFO][4000] ipam_plugin.go 264: Auto assigning IP 
ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" HandleID="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-7dwb5", "timestamp":"2024-06-25 14:15:56.207247172 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.217 [INFO][4000] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.246 [INFO][4000] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.247 [INFO][4000] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.249 [INFO][4000] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.262 [INFO][4000] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.269 [INFO][4000] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.271 [INFO][4000] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.276 [INFO][4000] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:15:56.309982 
containerd[1350]: 2024-06-25 14:15:56.276 [INFO][4000] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.278 [INFO][4000] ipam.go 1685: Creating new handle: k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4 Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.282 [INFO][4000] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.288 [INFO][4000] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.288 [INFO][4000] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" host="localhost" Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.288 [INFO][4000] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:15:56.309982 containerd[1350]: 2024-06-25 14:15:56.288 [INFO][4000] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" HandleID="k8s-pod-network.79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.310546 containerd[1350]: 2024-06-25 14:15:56.291 [INFO][3967] k8s.go 386: Populated endpoint ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7dwb5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"58967aa2-30a2-441d-bdef-2abe02a8e0ec", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-7dwb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46f4affadcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:15:56.310546 containerd[1350]: 2024-06-25 14:15:56.291 [INFO][3967] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.310546 containerd[1350]: 2024-06-25 14:15:56.291 [INFO][3967] dataplane_linux.go 68: Setting the host side veth name to cali46f4affadcf ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.310546 containerd[1350]: 2024-06-25 14:15:56.295 [INFO][3967] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.310546 containerd[1350]: 2024-06-25 14:15:56.295 [INFO][3967] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7dwb5-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"58967aa2-30a2-441d-bdef-2abe02a8e0ec", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4", Pod:"coredns-5dd5756b68-7dwb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46f4affadcf", MAC:"ba:ff:7a:5f:53:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:15:56.310546 containerd[1350]: 2024-06-25 14:15:56.305 [INFO][3967] k8s.go 500: Wrote updated endpoint to datastore ContainerID="79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4" Namespace="kube-system" Pod="coredns-5dd5756b68-7dwb5" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:15:56.316000 audit[4065]: NETFILTER_CFG 
table=filter:103 family=2 entries=34 op=nft_register_chain pid=4065 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:15:56.316000 audit[4065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18220 a0=3 a1=ffffd1f00d20 a2=0 a3=ffffaf607fa8 items=0 ppid=3462 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:56.316000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:15:56.330817 containerd[1350]: time="2024-06-25T14:15:56.330365421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:15:56.330817 containerd[1350]: time="2024-06-25T14:15:56.330422621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:56.330817 containerd[1350]: time="2024-06-25T14:15:56.330524221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:15:56.330817 containerd[1350]: time="2024-06-25T14:15:56.330539701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:15:56.336586 systemd-resolved[1265]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:15:56.358696 systemd-resolved[1265]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:15:56.364762 containerd[1350]: time="2024-06-25T14:15:56.364707332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5rqtj,Uid:5e87b194-5eb9-4034-8536-a78f10e6f560,Namespace:kube-system,Attempt:1,} returns sandbox id \"d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191\"" Jun 25 14:15:56.365728 kubelet[2386]: E0625 14:15:56.365699 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:56.367607 containerd[1350]: time="2024-06-25T14:15:56.367568491Z" level=info msg="CreateContainer within sandbox \"d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:15:56.378882 containerd[1350]: time="2024-06-25T14:15:56.378841088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7dwb5,Uid:58967aa2-30a2-441d-bdef-2abe02a8e0ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4\"" Jun 25 14:15:56.379524 kubelet[2386]: E0625 14:15:56.379499 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:56.382779 containerd[1350]: time="2024-06-25T14:15:56.382735927Z" level=info msg="CreateContainer within sandbox \"79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 
14:15:56.384064 containerd[1350]: time="2024-06-25T14:15:56.384028727Z" level=info msg="CreateContainer within sandbox \"d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48570547024e219e6052d013cd692d6515487fb43934ebe5155be0d331e90844\"" Jun 25 14:15:56.384458 containerd[1350]: time="2024-06-25T14:15:56.384430807Z" level=info msg="StartContainer for \"48570547024e219e6052d013cd692d6515487fb43934ebe5155be0d331e90844\"" Jun 25 14:15:56.395681 containerd[1350]: time="2024-06-25T14:15:56.395625804Z" level=info msg="CreateContainer within sandbox \"79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cbb0a85b2c470aa9849ec9db10a1be9d08bd55c3028405f78a48048aef42b66\"" Jun 25 14:15:56.399920 containerd[1350]: time="2024-06-25T14:15:56.399856243Z" level=info msg="StartContainer for \"8cbb0a85b2c470aa9849ec9db10a1be9d08bd55c3028405f78a48048aef42b66\"" Jun 25 14:15:56.445213 containerd[1350]: time="2024-06-25T14:15:56.445073911Z" level=info msg="StartContainer for \"48570547024e219e6052d013cd692d6515487fb43934ebe5155be0d331e90844\" returns successfully" Jun 25 14:15:56.447399 containerd[1350]: time="2024-06-25T14:15:56.447323591Z" level=info msg="StartContainer for \"8cbb0a85b2c470aa9849ec9db10a1be9d08bd55c3028405f78a48048aef42b66\" returns successfully" Jun 25 14:15:56.973253 kubelet[2386]: E0625 14:15:56.973211 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:56.976352 kubelet[2386]: E0625 14:15:56.976326 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:56.984608 kubelet[2386]: I0625 14:15:56.984573 2386 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5rqtj" podStartSLOduration=27.984538053 podCreationTimestamp="2024-06-25 14:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:15:56.983766893 +0000 UTC m=+41.265080919" watchObservedRunningTime="2024-06-25 14:15:56.984538053 +0000 UTC m=+41.265852079" Jun 25 14:15:57.012473 kubelet[2386]: I0625 14:15:57.012435 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7dwb5" podStartSLOduration=28.012385166 podCreationTimestamp="2024-06-25 14:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:15:57.012242046 +0000 UTC m=+41.293556032" watchObservedRunningTime="2024-06-25 14:15:57.012385166 +0000 UTC m=+41.293699232" Jun 25 14:15:57.021000 audit[4199]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=4199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:57.021000 audit[4199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffcdf51710 a2=0 a3=1 items=0 ppid=2550 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:57.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:57.022000 audit[4199]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:57.022000 audit[4199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcdf51710 a2=0 a3=1 items=0 ppid=2550 pid=4199 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:57.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:57.031000 audit[4201]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:57.031000 audit[4201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd23c0180 a2=0 a3=1 items=0 ppid=2550 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:57.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:57.037000 audit[4201]: NETFILTER_CFG table=nat:107 family=2 entries=47 op=nft_register_chain pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:15:57.037000 audit[4201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd23c0180 a2=0 a3=1 items=0 ppid=2550 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:57.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:15:57.977354 kubelet[2386]: E0625 14:15:57.977287 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:57.977591 kubelet[2386]: E0625 
14:15:57.977374 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:58.084066 systemd-networkd[1138]: cali46f4affadcf: Gained IPv6LL Jun 25 14:15:58.148038 systemd-networkd[1138]: cali956e5c18019: Gained IPv6LL Jun 25 14:15:58.979221 kubelet[2386]: E0625 14:15:58.979189 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:58.979657 kubelet[2386]: E0625 14:15:58.979251 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:15:59.502328 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972). Jun 25 14:15:59.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.23:22-10.0.0.1:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:59.508133 kernel: kauditd_printk_skb: 22 callbacks suppressed Jun 25 14:15:59.508350 kernel: audit: type=1130 audit(1719324959.501:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.23:22-10.0.0.1:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:59.549000 audit[4215]: USER_ACCT pid=4215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.550654 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:15:59.552280 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:15:59.550000 audit[4215]: CRED_ACQ pid=4215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.555454 kernel: audit: type=1101 audit(1719324959.549:323): pid=4215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.555521 kernel: audit: type=1103 audit(1719324959.550:324): pid=4215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.557231 kernel: audit: type=1006 audit(1719324959.550:325): pid=4215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 14:15:59.557328 kernel: audit: type=1300 audit(1719324959.550:325): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff768e10 a2=3 a3=1 items=0 ppid=1 pid=4215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jun 25 14:15:59.550000 audit[4215]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff768e10 a2=3 a3=1 items=0 ppid=1 pid=4215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:59.550000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:15:59.560492 kernel: audit: type=1327 audit(1719324959.550:325): proctitle=737368643A20636F7265205B707269765D Jun 25 14:15:59.563562 systemd-logind[1335]: New session 9 of user core. Jun 25 14:15:59.569171 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 14:15:59.571000 audit[4215]: USER_START pid=4215 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.575000 audit[4218]: CRED_ACQ pid=4218 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.578815 kernel: audit: type=1105 audit(1719324959.571:326): pid=4215 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.578959 kernel: audit: type=1103 audit(1719324959.575:327): pid=4218 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.758239 sshd[4215]: pam_unix(sshd:session): session closed for user core Jun 25 
14:15:59.758000 audit[4215]: USER_END pid=4215 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.758000 audit[4215]: CRED_DISP pid=4215 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.762776 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:51972.service: Deactivated successfully. Jun 25 14:15:59.764001 systemd-logind[1335]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:15:59.764031 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:15:59.764833 systemd-logind[1335]: Removed session 9. Jun 25 14:15:59.764990 kernel: audit: type=1106 audit(1719324959.758:328): pid=4215 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.765035 kernel: audit: type=1104 audit(1719324959.758:329): pid=4215 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:15:59.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.23:22-10.0.0.1:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:59.841049 containerd[1350]: time="2024-06-25T14:15:59.840949407Z" level=info msg="StopPodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\"" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.926 [INFO][4248] k8s.go 608: Cleaning up netns ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.926 [INFO][4248] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" iface="eth0" netns="/var/run/netns/cni-61b2fb06-709a-bb0e-b538-74cbb3a170c7" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.926 [INFO][4248] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" iface="eth0" netns="/var/run/netns/cni-61b2fb06-709a-bb0e-b538-74cbb3a170c7" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.926 [INFO][4248] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" iface="eth0" netns="/var/run/netns/cni-61b2fb06-709a-bb0e-b538-74cbb3a170c7" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.926 [INFO][4248] k8s.go 615: Releasing IP address(es) ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.926 [INFO][4248] utils.go 188: Calico CNI releasing IP address ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.948 [INFO][4256] ipam_plugin.go 411: Releasing address using handleID ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.948 [INFO][4256] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.948 [INFO][4256] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.986 [WARNING][4256] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.986 [INFO][4256] ipam_plugin.go 439: Releasing address using workloadID ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.989 [INFO][4256] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:15:59.993459 containerd[1350]: 2024-06-25 14:15:59.991 [INFO][4248] k8s.go 621: Teardown processing complete. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:15:59.996705 containerd[1350]: time="2024-06-25T14:15:59.996562934Z" level=info msg="TearDown network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" successfully" Jun 25 14:15:59.996705 containerd[1350]: time="2024-06-25T14:15:59.996605454Z" level=info msg="StopPodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" returns successfully" Jun 25 14:15:59.996597 systemd[1]: run-netns-cni\x2d61b2fb06\x2d709a\x2dbb0e\x2db538\x2d74cbb3a170c7.mount: Deactivated successfully. 
Jun 25 14:15:59.997871 containerd[1350]: time="2024-06-25T14:15:59.997839894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8lrw,Uid:ab547801-7d4b-41c4-b3b9-81712e462073,Namespace:calico-system,Attempt:1,}" Jun 25 14:16:00.139237 systemd-networkd[1138]: cali2fb9a0e294d: Link UP Jun 25 14:16:00.140084 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:16:00.140159 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2fb9a0e294d: link becomes ready Jun 25 14:16:00.141405 systemd-networkd[1138]: cali2fb9a0e294d: Gained carrier Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.068 [INFO][4263] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v8lrw-eth0 csi-node-driver- calico-system ab547801-7d4b-41c4-b3b9-81712e462073 830 0 2024-06-25 14:15:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-v8lrw eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali2fb9a0e294d [] []}} ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.068 [INFO][4263] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.094 [INFO][4276] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" HandleID="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.108 [INFO][4276] ipam_plugin.go 264: Auto assigning IP ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" HandleID="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v8lrw", "timestamp":"2024-06-25 14:16:00.094857035 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.108 [INFO][4276] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.109 [INFO][4276] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.109 [INFO][4276] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.111 [INFO][4276] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.115 [INFO][4276] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.119 [INFO][4276] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.121 [INFO][4276] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.123 [INFO][4276] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.123 [INFO][4276] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.125 [INFO][4276] ipam.go 1685: Creating new handle: k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3 Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.128 [INFO][4276] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.133 [INFO][4276] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" 
host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.133 [INFO][4276] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" host="localhost" Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.133 [INFO][4276] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:00.155929 containerd[1350]: 2024-06-25 14:16:00.133 [INFO][4276] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" HandleID="k8s-pod-network.a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.156481 containerd[1350]: 2024-06-25 14:16:00.135 [INFO][4263] k8s.go 386: Populated endpoint ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8lrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ab547801-7d4b-41c4-b3b9-81712e462073", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v8lrw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2fb9a0e294d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:00.156481 containerd[1350]: 2024-06-25 14:16:00.135 [INFO][4263] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.156481 containerd[1350]: 2024-06-25 14:16:00.135 [INFO][4263] dataplane_linux.go 68: Setting the host side veth name to cali2fb9a0e294d ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.156481 containerd[1350]: 2024-06-25 14:16:00.141 [INFO][4263] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.156481 containerd[1350]: 2024-06-25 14:16:00.142 [INFO][4263] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8lrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ab547801-7d4b-41c4-b3b9-81712e462073", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3", Pod:"csi-node-driver-v8lrw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2fb9a0e294d", MAC:"9a:7f:ad:59:63:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:00.156481 containerd[1350]: 2024-06-25 14:16:00.153 [INFO][4263] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3" Namespace="calico-system" Pod="csi-node-driver-v8lrw" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:00.164000 audit[4300]: NETFILTER_CFG table=filter:108 family=2 entries=42 op=nft_register_chain pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:16:00.164000 audit[4300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21016 a0=3 
a1=ffffd9cae070 a2=0 a3=ffffa7502fa8 items=0 ppid=3462 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:00.164000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:16:00.172354 containerd[1350]: time="2024-06-25T14:16:00.172199619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:16:00.172354 containerd[1350]: time="2024-06-25T14:16:00.172318139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:00.172542 containerd[1350]: time="2024-06-25T14:16:00.172337299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:16:00.173036 containerd[1350]: time="2024-06-25T14:16:00.172657939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:00.195162 systemd-resolved[1265]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:16:00.205178 containerd[1350]: time="2024-06-25T14:16:00.205136253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8lrw,Uid:ab547801-7d4b-41c4-b3b9-81712e462073,Namespace:calico-system,Attempt:1,} returns sandbox id \"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3\"" Jun 25 14:16:00.206782 containerd[1350]: time="2024-06-25T14:16:00.206754253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:16:01.101064 containerd[1350]: time="2024-06-25T14:16:01.101020357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:01.101609 containerd[1350]: time="2024-06-25T14:16:01.101563237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:16:01.102863 containerd[1350]: time="2024-06-25T14:16:01.102827917Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:01.105093 containerd[1350]: time="2024-06-25T14:16:01.105050316Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:01.106489 containerd[1350]: time="2024-06-25T14:16:01.106462076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:01.107661 containerd[1350]: time="2024-06-25T14:16:01.107620236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id 
\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 900.675823ms" Jun 25 14:16:01.107722 containerd[1350]: time="2024-06-25T14:16:01.107660916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:16:01.109626 containerd[1350]: time="2024-06-25T14:16:01.109517835Z" level=info msg="CreateContainer within sandbox \"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:16:01.125452 containerd[1350]: time="2024-06-25T14:16:01.125086593Z" level=info msg="CreateContainer within sandbox \"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"693ccbd71c84ddbf99d6819a49386f8ec084d44506a06900f2d3b3c92aee439f\"" Jun 25 14:16:01.126092 containerd[1350]: time="2024-06-25T14:16:01.126065752Z" level=info msg="StartContainer for \"693ccbd71c84ddbf99d6819a49386f8ec084d44506a06900f2d3b3c92aee439f\"" Jun 25 14:16:01.182231 containerd[1350]: time="2024-06-25T14:16:01.182186222Z" level=info msg="StartContainer for \"693ccbd71c84ddbf99d6819a49386f8ec084d44506a06900f2d3b3c92aee439f\" returns successfully" Jun 25 14:16:01.184233 containerd[1350]: time="2024-06-25T14:16:01.183520302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:16:01.996774 systemd[1]: run-containerd-runc-k8s.io-693ccbd71c84ddbf99d6819a49386f8ec084d44506a06900f2d3b3c92aee439f-runc.vIWfGo.mount: Deactivated successfully. 
Jun 25 14:16:02.180014 systemd-networkd[1138]: cali2fb9a0e294d: Gained IPv6LL Jun 25 14:16:02.185476 containerd[1350]: time="2024-06-25T14:16:02.185423998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:02.186922 containerd[1350]: time="2024-06-25T14:16:02.186867078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:16:02.188011 containerd[1350]: time="2024-06-25T14:16:02.187968998Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:02.191732 containerd[1350]: time="2024-06-25T14:16:02.191691557Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:02.193129 containerd[1350]: time="2024-06-25T14:16:02.193088837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:02.193969 containerd[1350]: time="2024-06-25T14:16:02.193937117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.010381055s" Jun 25 14:16:02.194080 containerd[1350]: time="2024-06-25T14:16:02.194056077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference 
\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:16:02.195996 containerd[1350]: time="2024-06-25T14:16:02.195967396Z" level=info msg="CreateContainer within sandbox \"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:16:02.223287 containerd[1350]: time="2024-06-25T14:16:02.223233912Z" level=info msg="CreateContainer within sandbox \"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5e50a2cf589eda824bd7cf5491330d70db0a0a1f1b7fb7df7a180f8f95b64a28\"" Jun 25 14:16:02.223740 containerd[1350]: time="2024-06-25T14:16:02.223711152Z" level=info msg="StartContainer for \"5e50a2cf589eda824bd7cf5491330d70db0a0a1f1b7fb7df7a180f8f95b64a28\"" Jun 25 14:16:02.282032 containerd[1350]: time="2024-06-25T14:16:02.281876781Z" level=info msg="StartContainer for \"5e50a2cf589eda824bd7cf5491330d70db0a0a1f1b7fb7df7a180f8f95b64a28\" returns successfully" Jun 25 14:16:02.935357 kubelet[2386]: I0625 14:16:02.935324 2386 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:16:02.935740 kubelet[2386]: I0625 14:16:02.935368 2386 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:16:03.005263 kubelet[2386]: I0625 14:16:03.005225 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-v8lrw" podStartSLOduration=26.017231312 podCreationTimestamp="2024-06-25 14:15:35 +0000 UTC" firstStartedPulling="2024-06-25 14:16:00.206425253 +0000 UTC m=+44.487739279" lastFinishedPulling="2024-06-25 14:16:02.194369437 +0000 UTC m=+46.475683463" observedRunningTime="2024-06-25 
14:16:03.004916976 +0000 UTC m=+47.286231002" watchObservedRunningTime="2024-06-25 14:16:03.005175496 +0000 UTC m=+47.286489522" Jun 25 14:16:04.773397 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984). Jun 25 14:16:04.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.23:22-10.0.0.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:04.774580 kernel: kauditd_printk_skb: 4 callbacks suppressed Jun 25 14:16:04.774714 kernel: audit: type=1130 audit(1719324964.772:332): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.23:22-10.0.0.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:04.810000 audit[4421]: USER_ACCT pid=4421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.813178 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:04.812000 audit[4421]: CRED_ACQ pid=4421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.814828 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:04.817218 kernel: audit: type=1101 audit(1719324964.810:333): pid=4421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.817276 kernel: audit: type=1103 audit(1719324964.812:334): pid=4421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.817300 kernel: audit: type=1006 audit(1719324964.812:335): pid=4421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 14:16:04.812000 audit[4421]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd230a790 a2=3 a3=1 items=0 ppid=1 pid=4421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:04.822005 kernel: audit: type=1300 audit(1719324964.812:335): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd230a790 a2=3 a3=1 items=0 ppid=1 pid=4421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:04.822239 kernel: audit: type=1327 audit(1719324964.812:335): proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:04.812000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:04.825168 systemd-logind[1335]: New session 10 of user core. Jun 25 14:16:04.833254 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 14:16:04.836000 audit[4421]: USER_START pid=4421 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.839920 kernel: audit: type=1105 audit(1719324964.836:336): pid=4421 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.839000 audit[4424]: CRED_ACQ pid=4424 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:04.842914 kernel: audit: type=1103 audit(1719324964.839:337): pid=4424 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.023399 sshd[4421]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:05.023000 audit[4421]: USER_END pid=4421 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.023000 audit[4421]: CRED_DISP pid=4421 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.029929 kernel: audit: type=1106 audit(1719324965.023:338): 
pid=4421 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.030021 kernel: audit: type=1104 audit(1719324965.023:339): pid=4421 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.039441 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:52000.service - OpenSSH per-connection server daemon (10.0.0.1:52000). Jun 25 14:16:05.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.23:22-10.0.0.1:52000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:05.040079 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:51984.service: Deactivated successfully. Jun 25 14:16:05.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.23:22-10.0.0.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:05.041123 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:16:05.041725 systemd-logind[1335]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:16:05.042570 systemd-logind[1335]: Removed session 10. 
Jun 25 14:16:05.069000 audit[4435]: USER_ACCT pid=4435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.070208 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 52000 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:05.070000 audit[4435]: CRED_ACQ pid=4435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.070000 audit[4435]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4bd1ca0 a2=3 a3=1 items=0 ppid=1 pid=4435 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:05.070000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:05.071609 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:05.075930 systemd-logind[1335]: New session 11 of user core. Jun 25 14:16:05.089225 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 14:16:05.092000 audit[4435]: USER_START pid=4435 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.094000 audit[4439]: CRED_ACQ pid=4439 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.435331 sshd[4435]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:05.436000 audit[4435]: USER_END pid=4435 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.436000 audit[4435]: CRED_DISP pid=4435 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.440401 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:52012.service - OpenSSH per-connection server daemon (10.0.0.1:52012). Jun 25 14:16:05.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.23:22-10.0.0.1:52012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:05.441058 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:52000.service: Deactivated successfully. 
Jun 25 14:16:05.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.23:22-10.0.0.1:52000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:05.445179 systemd-logind[1335]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:16:05.446732 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:16:05.452030 systemd-logind[1335]: Removed session 11. Jun 25 14:16:05.492644 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 52012 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:05.491000 audit[4449]: USER_ACCT pid=4449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.494160 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:05.492000 audit[4449]: CRED_ACQ pid=4449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.492000 audit[4449]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd08936e0 a2=3 a3=1 items=0 ppid=1 pid=4449 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:05.492000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:05.498245 systemd-logind[1335]: New session 12 of user core. Jun 25 14:16:05.508191 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 14:16:05.511000 audit[4449]: USER_START pid=4449 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.512000 audit[4454]: CRED_ACQ pid=4454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.717748 sshd[4449]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:05.718000 audit[4449]: USER_END pid=4449 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.718000 audit[4449]: CRED_DISP pid=4449 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:05.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.23:22-10.0.0.1:52012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:05.723997 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:52012.service: Deactivated successfully. Jun 25 14:16:05.725814 systemd-logind[1335]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:16:05.725913 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:16:05.727045 systemd-logind[1335]: Removed session 12. 
Jun 25 14:16:10.724485 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:53910.service - OpenSSH per-connection server daemon (10.0.0.1:53910). Jun 25 14:16:10.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.23:22-10.0.0.1:53910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:10.727613 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:16:10.727727 kernel: audit: type=1130 audit(1719324970.723:359): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.23:22-10.0.0.1:53910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:10.756000 audit[4483]: USER_ACCT pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.757370 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 53910 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:10.759256 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:10.757000 audit[4483]: CRED_ACQ pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.762758 kernel: audit: type=1101 audit(1719324970.756:360): pid=4483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.762853 kernel: audit: type=1103 audit(1719324970.757:361): pid=4483 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.762880 kernel: audit: type=1006 audit(1719324970.758:362): pid=4483 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 14:16:10.758000 audit[4483]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe06e2580 a2=3 a3=1 items=0 ppid=1 pid=4483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:10.767909 kernel: audit: type=1300 audit(1719324970.758:362): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe06e2580 a2=3 a3=1 items=0 ppid=1 pid=4483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:10.767992 kernel: audit: type=1327 audit(1719324970.758:362): proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:10.758000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:10.768606 systemd-logind[1335]: New session 13 of user core. Jun 25 14:16:10.775254 systemd[1]: Started session-13.scope - Session 13 of User core. 
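The bracketed stamps in the kernel-relayed audit records, e.g. `audit(1719324970.723:359)`, are Unix epoch seconds (with milliseconds) plus a per-boot event serial number. A quick sketch of converting one back to wall-clock time, which lines up with the journal's own `Jun 25 14:16:10` prefix:

```python
from datetime import datetime, timezone

# "audit(1719324970.723:359)" = <epoch seconds>.<millis>:<event serial>
epoch, serial = 1719324970, 359
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat(), serial)
# -> 2024-06-25T14:16:10+00:00 359
```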
Jun 25 14:16:10.778000 audit[4483]: USER_START pid=4483 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.780000 audit[4486]: CRED_ACQ pid=4486 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.785787 kernel: audit: type=1105 audit(1719324970.778:363): pid=4483 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.785875 kernel: audit: type=1103 audit(1719324970.780:364): pid=4486 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.919914 sshd[4483]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:10.922000 audit[4483]: USER_END pid=4483 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.922000 audit[4483]: CRED_DISP pid=4483 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.928347 kernel: audit: type=1106 audit(1719324970.922:365): 
pid=4483 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.928409 kernel: audit: type=1104 audit(1719324970.922:366): pid=4483 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.934396 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Jun 25 14:16:10.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.23:22-10.0.0.1:53926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:10.935068 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:53910.service: Deactivated successfully. Jun 25 14:16:10.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.23:22-10.0.0.1:53910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:10.936448 systemd-logind[1335]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:16:10.936528 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:16:10.937326 systemd-logind[1335]: Removed session 13. 
Jun 25 14:16:10.964000 audit[4495]: USER_ACCT pid=4495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.965559 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:10.965000 audit[4495]: CRED_ACQ pid=4495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.965000 audit[4495]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe94e75c0 a2=3 a3=1 items=0 ppid=1 pid=4495 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:10.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:10.966860 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:10.970561 systemd-logind[1335]: New session 14 of user core. Jun 25 14:16:10.980233 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 14:16:10.983000 audit[4495]: USER_START pid=4495 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:10.985000 audit[4500]: CRED_ACQ pid=4500 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.228193 sshd[4495]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:11.229000 audit[4495]: USER_END pid=4495 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.229000 audit[4495]: CRED_DISP pid=4495 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.237454 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:53930.service - OpenSSH per-connection server daemon (10.0.0.1:53930). Jun 25 14:16:11.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.23:22-10.0.0.1:53930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:11.238122 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:53926.service: Deactivated successfully. 
Jun 25 14:16:11.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.23:22-10.0.0.1:53926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:11.239673 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:16:11.239677 systemd-logind[1335]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:16:11.243354 systemd-logind[1335]: Removed session 14. Jun 25 14:16:11.276000 audit[4507]: USER_ACCT pid=4507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.277622 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 53930 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:11.278000 audit[4507]: CRED_ACQ pid=4507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.278000 audit[4507]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec34a8a0 a2=3 a3=1 items=0 ppid=1 pid=4507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:11.278000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:11.279478 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:11.283949 systemd-logind[1335]: New session 15 of user core. Jun 25 14:16:11.295180 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 14:16:11.299000 audit[4507]: USER_START pid=4507 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.302000 audit[4512]: CRED_ACQ pid=4512 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:11.992000 audit[4525]: NETFILTER_CFG table=filter:109 family=2 entries=20 op=nft_register_rule pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:11.992000 audit[4525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=fffffa2c6f20 a2=0 a3=1 items=0 ppid=2550 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:11.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:11.996000 audit[4525]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:11.996000 audit[4525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffa2c6f20 a2=0 a3=1 items=0 ppid=2550 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:11.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:12.008456 sshd[4507]: 
pam_unix(sshd:session): session closed for user core Jun 25 14:16:12.009000 audit[4507]: USER_END pid=4507 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.009000 audit[4507]: CRED_DISP pid=4507 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.014405 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938). Jun 25 14:16:12.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.23:22-10.0.0.1:53938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.015100 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:53930.service: Deactivated successfully. Jun 25 14:16:12.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.23:22-10.0.0.1:53930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.016822 systemd-logind[1335]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:16:12.016926 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:16:12.018084 systemd-logind[1335]: Removed session 15. 
Jun 25 14:16:12.021000 audit[4530]: NETFILTER_CFG table=filter:111 family=2 entries=32 op=nft_register_rule pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:12.021000 audit[4530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffe3457cd0 a2=0 a3=1 items=0 ppid=2550 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:12.022000 audit[4530]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:12.022000 audit[4530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe3457cd0 a2=0 a3=1 items=0 ppid=2550 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:12.050000 audit[4527]: USER_ACCT pid=4527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.051577 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:12.051000 audit[4527]: CRED_ACQ pid=4527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.051000 audit[4527]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcce7c380 a2=3 a3=1 items=0 ppid=1 pid=4527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.051000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:12.053309 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:12.060197 systemd-logind[1335]: New session 16 of user core. Jun 25 14:16:12.065194 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 14:16:12.070000 audit[4527]: USER_START pid=4527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.071000 audit[4533]: CRED_ACQ pid=4533 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.502185 sshd[4527]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:12.503000 audit[4527]: USER_END pid=4527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.504000 audit[4527]: CRED_DISP pid=4527 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jun 25 14:16:12.510497 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Jun 25 14:16:12.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.23:22-10.0.0.1:53954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.511254 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:53938.service: Deactivated successfully. Jun 25 14:16:12.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.23:22-10.0.0.1:53938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.512469 systemd-logind[1335]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:16:12.512523 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:16:12.513335 systemd-logind[1335]: Removed session 16. 
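The `NETFILTER_CFG` events above carry their own hex `proctitle`; splitting on the NUL separators recovers the full argv of the `iptables-restore` invocation (lock-wait and interval flags, no flush, preserve counters). A sketch of that decoding:

```python
# Hex proctitle from the NETFILTER_CFG audit records above; NULs separate argv.
hexstr = ("69707461626C65732D726573746F7265002D770035002D5700"
          "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
argv = [word.decode() for word in bytes.fromhex(hexstr).split(b"\x00")]
print(argv)
# -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```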
Jun 25 14:16:12.544000 audit[4541]: USER_ACCT pid=4541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.545573 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:12.545000 audit[4541]: CRED_ACQ pid=4541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.545000 audit[4541]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb7d8a00 a2=3 a3=1 items=0 ppid=1 pid=4541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.545000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:12.546925 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:12.550802 systemd-logind[1335]: New session 17 of user core. Jun 25 14:16:12.561167 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 14:16:12.564000 audit[4541]: USER_START pid=4541 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.565000 audit[4546]: CRED_ACQ pid=4546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.701633 sshd[4541]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:12.701000 audit[4541]: USER_END pid=4541 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.701000 audit[4541]: CRED_DISP pid=4541 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:12.704208 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:53954.service: Deactivated successfully. Jun 25 14:16:12.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.23:22-10.0.0.1:53954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.705342 systemd-logind[1335]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:16:12.705421 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:16:12.706259 systemd-logind[1335]: Removed session 17. 
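Sessions 12 through 17 above all follow the same cycle: logind announces `New session N of user core`, then `Removed session N` once sshd closes it. A minimal sketch (sample lines abridged from this log) of tracking which sessions remain open from those messages:

```python
import re

# Abridged systemd-logind lines from the log above (timestamps trimmed).
lines = [
    "systemd-logind[1335]: New session 16 of user core.",
    "systemd-logind[1335]: Removed session 16.",
    "systemd-logind[1335]: New session 17 of user core.",
    "systemd-logind[1335]: Removed session 17.",
]

open_sessions = set()
for line in lines:
    if m := re.search(r"New session (\d+) of user", line):
        open_sessions.add(m.group(1))
    elif m := re.search(r"Removed session (\d+)", line):
        open_sessions.discard(m.group(1))

print(sorted(open_sessions))  # every session was closed
# -> []
```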
Jun 25 14:16:14.275572 systemd[1]: run-containerd-runc-k8s.io-9a86ab285269a4dec9eedcf131a1fcf848a915edbb42b984b11998874d720c76-runc.47Zbv1.mount: Deactivated successfully. Jun 25 14:16:15.803340 containerd[1350]: time="2024-06-25T14:16:15.803272231Z" level=info msg="StopPodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\"" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.838 [WARNING][4595] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5rqtj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5e87b194-5eb9-4034-8536-a78f10e6f560", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191", Pod:"coredns-5dd5756b68-5rqtj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali956e5c18019", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.839 [INFO][4595] k8s.go 608: Cleaning up netns ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.839 [INFO][4595] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" iface="eth0" netns="" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.839 [INFO][4595] k8s.go 615: Releasing IP address(es) ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.839 [INFO][4595] utils.go 188: Calico CNI releasing IP address ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.861 [INFO][4603] ipam_plugin.go 411: Releasing address using handleID ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.862 [INFO][4603] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.862 [INFO][4603] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.870 [WARNING][4603] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.870 [INFO][4603] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.872 [INFO][4603] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:15.874963 containerd[1350]: 2024-06-25 14:16:15.873 [INFO][4595] k8s.go 621: Teardown processing complete. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.875491 containerd[1350]: time="2024-06-25T14:16:15.875456306Z" level=info msg="TearDown network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" successfully" Jun 25 14:16:15.875553 containerd[1350]: time="2024-06-25T14:16:15.875538106Z" level=info msg="StopPodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" returns successfully" Jun 25 14:16:15.876400 containerd[1350]: time="2024-06-25T14:16:15.876369266Z" level=info msg="RemovePodSandbox for \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\"" Jun 25 14:16:15.881615 containerd[1350]: time="2024-06-25T14:16:15.876408386Z" level=info msg="Forcibly stopping sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\"" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.918 [WARNING][4627] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5rqtj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5e87b194-5eb9-4034-8536-a78f10e6f560", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d454289d4c491085f81f225678eec39a6918cfe895ea61f2f0f8c85fbe641191", Pod:"coredns-5dd5756b68-5rqtj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali956e5c18019", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.918 
[INFO][4627] k8s.go 608: Cleaning up netns ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.918 [INFO][4627] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" iface="eth0" netns="" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.918 [INFO][4627] k8s.go 615: Releasing IP address(es) ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.918 [INFO][4627] utils.go 188: Calico CNI releasing IP address ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.937 [INFO][4635] ipam_plugin.go 411: Releasing address using handleID ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.937 [INFO][4635] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.937 [INFO][4635] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.951 [WARNING][4635] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.951 [INFO][4635] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" HandleID="k8s-pod-network.61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Workload="localhost-k8s-coredns--5dd5756b68--5rqtj-eth0" Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.952 [INFO][4635] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:15.958937 containerd[1350]: 2024-06-25 14:16:15.957 [INFO][4627] k8s.go 621: Teardown processing complete. ContainerID="61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d" Jun 25 14:16:15.959464 containerd[1350]: time="2024-06-25T14:16:15.959430500Z" level=info msg="TearDown network for sandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" successfully" Jun 25 14:16:15.962372 containerd[1350]: time="2024-06-25T14:16:15.962339059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 14:16:15.962593 containerd[1350]: time="2024-06-25T14:16:15.962567299Z" level=info msg="RemovePodSandbox \"61b74a83ce337ae90dc2a06c6364ca8f1f8f7392ccfa67d800a8427b2d2f124d\" returns successfully" Jun 25 14:16:15.963111 containerd[1350]: time="2024-06-25T14:16:15.963084619Z" level=info msg="StopPodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\"" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:15.997 [WARNING][4658] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7dwb5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"58967aa2-30a2-441d-bdef-2abe02a8e0ec", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4", Pod:"coredns-5dd5756b68-7dwb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46f4affadcf", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:15.998 [INFO][4658] k8s.go 608: Cleaning up netns ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:15.998 [INFO][4658] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" iface="eth0" netns="" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:15.998 [INFO][4658] k8s.go 615: Releasing IP address(es) ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:15.998 [INFO][4658] utils.go 188: Calico CNI releasing IP address ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.016 [INFO][4665] ipam_plugin.go 411: Releasing address using handleID ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.016 [INFO][4665] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.016 [INFO][4665] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.025 [WARNING][4665] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.025 [INFO][4665] ipam_plugin.go 439: Releasing address using workloadID ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.027 [INFO][4665] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:16.030190 containerd[1350]: 2024-06-25 14:16:16.028 [INFO][4658] k8s.go 621: Teardown processing complete. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.030756 containerd[1350]: time="2024-06-25T14:16:16.030713894Z" level=info msg="TearDown network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" successfully" Jun 25 14:16:16.030830 containerd[1350]: time="2024-06-25T14:16:16.030815374Z" level=info msg="StopPodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" returns successfully" Jun 25 14:16:16.031448 containerd[1350]: time="2024-06-25T14:16:16.031376814Z" level=info msg="RemovePodSandbox for \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\"" Jun 25 14:16:16.031510 containerd[1350]: time="2024-06-25T14:16:16.031449694Z" level=info msg="Forcibly stopping sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\"" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.068 [WARNING][4689] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--7dwb5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"58967aa2-30a2-441d-bdef-2abe02a8e0ec", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79cd34b2522552e69c77e8f14ba41a41a9232ea5876fdc986036e69c2b1a00f4", Pod:"coredns-5dd5756b68-7dwb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46f4affadcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.068 
[INFO][4689] k8s.go 608: Cleaning up netns ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.068 [INFO][4689] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" iface="eth0" netns="" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.068 [INFO][4689] k8s.go 615: Releasing IP address(es) ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.068 [INFO][4689] utils.go 188: Calico CNI releasing IP address ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.095 [INFO][4697] ipam_plugin.go 411: Releasing address using handleID ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.095 [INFO][4697] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.095 [INFO][4697] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.103 [WARNING][4697] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.103 [INFO][4697] ipam_plugin.go 439: Releasing address using workloadID ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" HandleID="k8s-pod-network.254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Workload="localhost-k8s-coredns--5dd5756b68--7dwb5-eth0" Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.105 [INFO][4697] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:16.108623 containerd[1350]: 2024-06-25 14:16:16.106 [INFO][4689] k8s.go 621: Teardown processing complete. ContainerID="254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b" Jun 25 14:16:16.109938 containerd[1350]: time="2024-06-25T14:16:16.109884769Z" level=info msg="TearDown network for sandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" successfully" Jun 25 14:16:16.116117 containerd[1350]: time="2024-06-25T14:16:16.116081448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 14:16:16.116295 containerd[1350]: time="2024-06-25T14:16:16.116271208Z" level=info msg="RemovePodSandbox \"254b484894bf5ece82809568682b831f5ff0a66dc81805bc6a790d26adf4e15b\" returns successfully" Jun 25 14:16:16.116847 containerd[1350]: time="2024-06-25T14:16:16.116821528Z" level=info msg="StopPodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\"" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.151 [WARNING][4719] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0", GenerateName:"calico-kube-controllers-668cb8f956-", Namespace:"calico-system", SelfLink:"", UID:"5ba46353-450d-46a3-a19c-54c7d8f17c69", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668cb8f956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2", Pod:"calico-kube-controllers-668cb8f956-lh8rc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dd6b29008c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.151 [INFO][4719] k8s.go 608: Cleaning up netns ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.151 [INFO][4719] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" iface="eth0" netns="" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.151 [INFO][4719] k8s.go 615: Releasing IP address(es) ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.151 [INFO][4719] utils.go 188: Calico CNI releasing IP address ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.169 [INFO][4727] ipam_plugin.go 411: Releasing address using handleID ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.170 [INFO][4727] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.170 [INFO][4727] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.178 [WARNING][4727] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.178 [INFO][4727] ipam_plugin.go 439: Releasing address using workloadID ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.179 [INFO][4727] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:16.185509 containerd[1350]: 2024-06-25 14:16:16.183 [INFO][4719] k8s.go 621: Teardown processing complete. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.186406 containerd[1350]: time="2024-06-25T14:16:16.185533084Z" level=info msg="TearDown network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" successfully" Jun 25 14:16:16.186406 containerd[1350]: time="2024-06-25T14:16:16.185563724Z" level=info msg="StopPodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" returns successfully" Jun 25 14:16:16.186406 containerd[1350]: time="2024-06-25T14:16:16.186205124Z" level=info msg="RemovePodSandbox for \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\"" Jun 25 14:16:16.186406 containerd[1350]: time="2024-06-25T14:16:16.186243204Z" level=info msg="Forcibly stopping sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\"" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.227 [WARNING][4748] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0", GenerateName:"calico-kube-controllers-668cb8f956-", Namespace:"calico-system", SelfLink:"", UID:"5ba46353-450d-46a3-a19c-54c7d8f17c69", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668cb8f956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e323e060d344bc49066600dee5e1ee50ec0aa59a8e1bbc1d950d23ee47177d2", Pod:"calico-kube-controllers-668cb8f956-lh8rc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3dd6b29008c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.228 [INFO][4748] k8s.go 608: Cleaning up netns ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.228 [INFO][4748] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" iface="eth0" netns="" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.228 [INFO][4748] k8s.go 615: Releasing IP address(es) ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.228 [INFO][4748] utils.go 188: Calico CNI releasing IP address ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.248 [INFO][4756] ipam_plugin.go 411: Releasing address using handleID ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.248 [INFO][4756] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.248 [INFO][4756] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.256 [WARNING][4756] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.256 [INFO][4756] ipam_plugin.go 439: Releasing address using workloadID ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" HandleID="k8s-pod-network.877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Workload="localhost-k8s-calico--kube--controllers--668cb8f956--lh8rc-eth0" Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.258 [INFO][4756] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:16.261209 containerd[1350]: 2024-06-25 14:16:16.259 [INFO][4748] k8s.go 621: Teardown processing complete. ContainerID="877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168" Jun 25 14:16:16.261634 containerd[1350]: time="2024-06-25T14:16:16.261245798Z" level=info msg="TearDown network for sandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" successfully" Jun 25 14:16:16.263873 containerd[1350]: time="2024-06-25T14:16:16.263833278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 14:16:16.263955 containerd[1350]: time="2024-06-25T14:16:16.263913118Z" level=info msg="RemovePodSandbox \"877f9371c7f227a54268577a7cb46604f1abe1ee54cf5fd5b6617861cf605168\" returns successfully" Jun 25 14:16:16.264355 containerd[1350]: time="2024-06-25T14:16:16.264329198Z" level=info msg="StopPodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\"" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.297 [WARNING][4778] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8lrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ab547801-7d4b-41c4-b3b9-81712e462073", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3", Pod:"csi-node-driver-v8lrw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali2fb9a0e294d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.297 [INFO][4778] k8s.go 608: Cleaning up netns ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.297 [INFO][4778] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" iface="eth0" netns="" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.298 [INFO][4778] k8s.go 615: Releasing IP address(es) ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.298 [INFO][4778] utils.go 188: Calico CNI releasing IP address ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.319 [INFO][4785] ipam_plugin.go 411: Releasing address using handleID ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.319 [INFO][4785] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.319 [INFO][4785] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.327 [WARNING][4785] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.327 [INFO][4785] ipam_plugin.go 439: Releasing address using workloadID ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.329 [INFO][4785] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:16.332054 containerd[1350]: 2024-06-25 14:16:16.330 [INFO][4778] k8s.go 621: Teardown processing complete. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.332478 containerd[1350]: time="2024-06-25T14:16:16.332122953Z" level=info msg="TearDown network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" successfully" Jun 25 14:16:16.332478 containerd[1350]: time="2024-06-25T14:16:16.332154633Z" level=info msg="StopPodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" returns successfully" Jun 25 14:16:16.332666 containerd[1350]: time="2024-06-25T14:16:16.332642273Z" level=info msg="RemovePodSandbox for \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\"" Jun 25 14:16:16.332754 containerd[1350]: time="2024-06-25T14:16:16.332713433Z" level=info msg="Forcibly stopping sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\"" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.367 [WARNING][4809] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8lrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ab547801-7d4b-41c4-b3b9-81712e462073", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 15, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a59c8543f4885bb64d718a3eb0fe5bdeb9f4d05c3371b1e21fbc4db191574cf3", Pod:"csi-node-driver-v8lrw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2fb9a0e294d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.367 [INFO][4809] k8s.go 608: Cleaning up netns ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.367 [INFO][4809] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" iface="eth0" netns="" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.368 [INFO][4809] k8s.go 615: Releasing IP address(es) ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.368 [INFO][4809] utils.go 188: Calico CNI releasing IP address ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.386 [INFO][4817] ipam_plugin.go 411: Releasing address using handleID ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.387 [INFO][4817] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.387 [INFO][4817] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.395 [WARNING][4817] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.395 [INFO][4817] ipam_plugin.go 439: Releasing address using workloadID ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" HandleID="k8s-pod-network.968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Workload="localhost-k8s-csi--node--driver--v8lrw-eth0" Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.397 [INFO][4817] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:16:16.399659 containerd[1350]: 2024-06-25 14:16:16.398 [INFO][4809] k8s.go 621: Teardown processing complete. ContainerID="968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282" Jun 25 14:16:16.401132 containerd[1350]: time="2024-06-25T14:16:16.401092428Z" level=info msg="TearDown network for sandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" successfully" Jun 25 14:16:16.403886 containerd[1350]: time="2024-06-25T14:16:16.403853348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:16:16.404043 containerd[1350]: time="2024-06-25T14:16:16.404019588Z" level=info msg="RemovePodSandbox \"968154b540b92ab21cb0f46e8ff8682d4af75da114686819b5d3c6a2cb615282\" returns successfully" Jun 25 14:16:16.591317 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 14:16:16.591441 kernel: audit: type=1325 audit(1719324976.588:408): table=filter:113 family=2 entries=20 op=nft_register_rule pid=4826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:16.591468 kernel: audit: type=1300 audit(1719324976.588:408): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc2d6d440 a2=0 a3=1 items=0 ppid=2550 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:16.588000 audit[4826]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:16.588000 audit[4826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc2d6d440 a2=0 a3=1 items=0 ppid=2550 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:16.593803 kernel: audit: type=1327 audit(1719324976.588:408): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:16.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:16.588000 audit[4826]: NETFILTER_CFG table=nat:114 family=2 entries=104 op=nft_register_chain pid=4826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:16.588000 audit[4826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc2d6d440 a2=0 a3=1 items=0 ppid=2550 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:16.605044 kernel: audit: type=1325 audit(1719324976.588:409): table=nat:114 family=2 entries=104 op=nft_register_chain pid=4826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:16.605107 kernel: audit: type=1300 audit(1719324976.588:409): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc2d6d440 a2=0 a3=1 items=0 ppid=2550 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:16.605140 kernel: audit: type=1327 audit(1719324976.588:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:16.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:17.712035 systemd[1]: Started 
sshd@17-10.0.0.23:22-10.0.0.1:53970.service - OpenSSH per-connection server daemon (10.0.0.1:53970). Jun 25 14:16:17.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.23:22-10.0.0.1:53970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:17.714931 kernel: audit: type=1130 audit(1719324977.711:410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.23:22-10.0.0.1:53970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:17.743000 audit[4828]: USER_ACCT pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.744502 sshd[4828]: Accepted publickey for core from 10.0.0.1 port 53970 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:17.746304 sshd[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:17.745000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.749340 kernel: audit: type=1101 audit(1719324977.743:411): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.749404 kernel: audit: type=1103 audit(1719324977.745:412): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.750987 kernel: audit: type=1006 audit(1719324977.745:413): pid=4828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 14:16:17.745000 audit[4828]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda0b9af0 a2=3 a3=1 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:17.745000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:17.754046 systemd-logind[1335]: New session 18 of user core. Jun 25 14:16:17.762196 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 14:16:17.765000 audit[4828]: USER_START pid=4828 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.767000 audit[4831]: CRED_ACQ pid=4831 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.880701 sshd[4828]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:17.880000 audit[4828]: USER_END pid=4828 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.881000 audit[4828]: CRED_DISP pid=4828 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:17.884952 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:53970.service: Deactivated successfully. Jun 25 14:16:17.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.23:22-10.0.0.1:53970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:17.886271 systemd-logind[1335]: Session 18 logged out. Waiting for processes to exit. Jun 25 14:16:17.886433 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:16:17.887264 systemd-logind[1335]: Removed session 18. Jun 25 14:16:22.891450 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:52574.service - OpenSSH per-connection server daemon (10.0.0.1:52574). Jun 25 14:16:22.893014 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:16:22.893090 kernel: audit: type=1130 audit(1719324982.890:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.23:22-10.0.0.1:52574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:22.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.23:22-10.0.0.1:52574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:22.921000 audit[4871]: USER_ACCT pid=4871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.922234 sshd[4871]: Accepted publickey for core from 10.0.0.1 port 52574 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:16:22.923810 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:22.922000 audit[4871]: CRED_ACQ pid=4871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.926538 kernel: audit: type=1101 audit(1719324982.921:420): pid=4871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.926593 kernel: audit: type=1103 audit(1719324982.922:421): pid=4871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.926623 kernel: audit: type=1006 audit(1719324982.922:422): pid=4871 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 14:16:22.922000 audit[4871]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd445cbf0 a2=3 a3=1 items=0 ppid=1 pid=4871 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:22.928303 
systemd-logind[1335]: New session 19 of user core. Jun 25 14:16:22.933201 kernel: audit: type=1300 audit(1719324982.922:422): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd445cbf0 a2=3 a3=1 items=0 ppid=1 pid=4871 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:22.933245 kernel: audit: type=1327 audit(1719324982.922:422): proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:22.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:22.933262 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 14:16:22.936000 audit[4871]: USER_START pid=4871 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.937000 audit[4874]: CRED_ACQ pid=4874 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.942621 kernel: audit: type=1105 audit(1719324982.936:423): pid=4871 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:22.942689 kernel: audit: type=1103 audit(1719324982.937:424): pid=4874 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:23.059013 sshd[4871]: pam_unix(sshd:session): session closed for 
user core Jun 25 14:16:23.059000 audit[4871]: USER_END pid=4871 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:23.062488 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:52574.service: Deactivated successfully. Jun 25 14:16:23.060000 audit[4871]: CRED_DISP pid=4871 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:23.063664 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:16:23.064035 systemd-logind[1335]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:16:23.064740 kernel: audit: type=1106 audit(1719324983.059:425): pid=4871 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:23.064815 kernel: audit: type=1104 audit(1719324983.060:426): pid=4871 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:16:23.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.23:22-10.0.0.1:52574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:23.064991 systemd-logind[1335]: Removed session 19. 
Jun 25 14:16:23.560466 systemd[1]: run-containerd-runc-k8s.io-9a86ab285269a4dec9eedcf131a1fcf848a915edbb42b984b11998874d720c76-runc.6Dls1d.mount: Deactivated successfully. Jun 25 14:16:23.980000 audit[4908]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:23.980000 audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff7c54b40 a2=0 a3=1 items=0 ppid=2550 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:23.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:23.991297 kubelet[2386]: I0625 14:16:23.991252 2386 topology_manager.go:215] "Topology Admit Handler" podUID="4704064d-a4d4-4bda-b0ca-ec6d2988d446" podNamespace="calico-apiserver" podName="calico-apiserver-868fd47f59-zs4f4" Jun 25 14:16:23.984000 audit[4908]: NETFILTER_CFG table=nat:116 family=2 entries=44 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:23.984000 audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=fffff7c54b40 a2=0 a3=1 items=0 ppid=2550 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:23.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:24.014000 audit[4910]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=4910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:24.014000 audit[4910]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc162df50 a2=0 a3=1 items=0 ppid=2550 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:24.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:24.016000 audit[4910]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=4910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:16:24.016000 audit[4910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=ffffc162df50 a2=0 a3=1 items=0 ppid=2550 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:24.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:16:24.143813 kubelet[2386]: I0625 14:16:24.143780 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4704064d-a4d4-4bda-b0ca-ec6d2988d446-calico-apiserver-certs\") pod \"calico-apiserver-868fd47f59-zs4f4\" (UID: \"4704064d-a4d4-4bda-b0ca-ec6d2988d446\") " pod="calico-apiserver/calico-apiserver-868fd47f59-zs4f4" Jun 25 14:16:24.143953 kubelet[2386]: I0625 14:16:24.143825 2386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhc9s\" (UniqueName: \"kubernetes.io/projected/4704064d-a4d4-4bda-b0ca-ec6d2988d446-kube-api-access-lhc9s\") pod \"calico-apiserver-868fd47f59-zs4f4\" (UID: \"4704064d-a4d4-4bda-b0ca-ec6d2988d446\") " 
pod="calico-apiserver/calico-apiserver-868fd47f59-zs4f4" Jun 25 14:16:24.245305 kubelet[2386]: E0625 14:16:24.245274 2386 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:16:24.254394 kubelet[2386]: E0625 14:16:24.254347 2386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4704064d-a4d4-4bda-b0ca-ec6d2988d446-calico-apiserver-certs podName:4704064d-a4d4-4bda-b0ca-ec6d2988d446 nodeName:}" failed. No retries permitted until 2024-06-25 14:16:24.747700384 +0000 UTC m=+69.029014410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4704064d-a4d4-4bda-b0ca-ec6d2988d446-calico-apiserver-certs") pod "calico-apiserver-868fd47f59-zs4f4" (UID: "4704064d-a4d4-4bda-b0ca-ec6d2988d446") : secret "calico-apiserver-certs" not found Jun 25 14:16:24.894797 containerd[1350]: time="2024-06-25T14:16:24.894750658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868fd47f59-zs4f4,Uid:4704064d-a4d4-4bda-b0ca-ec6d2988d446,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:16:25.033598 systemd-networkd[1138]: cali99fc84fe020: Link UP Jun 25 14:16:25.038539 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:16:25.038611 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali99fc84fe020: link becomes ready Jun 25 14:16:25.036868 systemd-networkd[1138]: cali99fc84fe020: Gained carrier Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.941 [INFO][4913] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0 calico-apiserver-868fd47f59- calico-apiserver 4704064d-a4d4-4bda-b0ca-ec6d2988d446 1055 0 2024-06-25 14:16:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868fd47f59 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-868fd47f59-zs4f4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali99fc84fe020 [] []}} ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.941 [INFO][4913] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.979 [INFO][4926] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" HandleID="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Workload="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.990 [INFO][4926] ipam_plugin.go 264: Auto assigning IP ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" HandleID="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Workload="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000129d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-868fd47f59-zs4f4", "timestamp":"2024-06-25 14:16:24.97933236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.991 [INFO][4926] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.991 [INFO][4926] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.991 [INFO][4926] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.992 [INFO][4926] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:24.997 [INFO][4926] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.008 [INFO][4926] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.011 [INFO][4926] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.013 [INFO][4926] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.013 [INFO][4926] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.016 [INFO][4926] ipam.go 1685: Creating new handle: k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245 Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.020 [INFO][4926] ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.026 [INFO][4926] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.026 [INFO][4926] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" host="localhost" Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.026 [INFO][4926] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:16:25.057763 containerd[1350]: 2024-06-25 14:16:25.026 [INFO][4926] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" HandleID="k8s-pod-network.115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Workload="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" Jun 25 14:16:25.058363 containerd[1350]: 2024-06-25 14:16:25.029 [INFO][4913] k8s.go 386: Populated endpoint ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0", GenerateName:"calico-apiserver-868fd47f59-", Namespace:"calico-apiserver", SelfLink:"", UID:"4704064d-a4d4-4bda-b0ca-ec6d2988d446", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 16, 23, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868fd47f59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-868fd47f59-zs4f4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99fc84fe020", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:16:25.058363 containerd[1350]: 2024-06-25 14:16:25.029 [INFO][4913] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" Jun 25 14:16:25.058363 containerd[1350]: 2024-06-25 14:16:25.030 [INFO][4913] dataplane_linux.go 68: Setting the host side veth name to cali99fc84fe020 ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" Jun 25 14:16:25.058363 containerd[1350]: 2024-06-25 14:16:25.038 [INFO][4913] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0"
Jun 25 14:16:25.058363 containerd[1350]: 2024-06-25 14:16:25.040 [INFO][4913] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0", GenerateName:"calico-apiserver-868fd47f59-", Namespace:"calico-apiserver", SelfLink:"", UID:"4704064d-a4d4-4bda-b0ca-ec6d2988d446", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 16, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868fd47f59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245", Pod:"calico-apiserver-868fd47f59-zs4f4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99fc84fe020", MAC:"fa:78:63:df:bf:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 14:16:25.058363 containerd[1350]: 2024-06-25 14:16:25.047 [INFO][4913] k8s.go 500: Wrote updated endpoint to datastore ContainerID="115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245" Namespace="calico-apiserver" Pod="calico-apiserver-868fd47f59-zs4f4" WorkloadEndpoint="localhost-k8s-calico--apiserver--868fd47f59--zs4f4-eth0"
Jun 25 14:16:25.077000 audit[4959]: NETFILTER_CFG table=filter:119 family=2 entries=61 op=nft_register_chain pid=4959 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jun 25 14:16:25.077000 audit[4959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30316 a0=3 a1=ffffe4aa5ca0 a2=0 a3=ffffb15a3fa8 items=0 ppid=3462 pid=4959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:25.077000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jun 25 14:16:25.080456 containerd[1350]: time="2024-06-25T14:16:25.080354408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 14:16:25.080456 containerd[1350]: time="2024-06-25T14:16:25.080405528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 14:16:25.080456 containerd[1350]: time="2024-06-25T14:16:25.080421528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 14:16:25.080456 containerd[1350]: time="2024-06-25T14:16:25.080431648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 14:16:25.108037 systemd-resolved[1265]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 14:16:25.125727 containerd[1350]: time="2024-06-25T14:16:25.125688097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868fd47f59-zs4f4,Uid:4704064d-a4d4-4bda-b0ca-ec6d2988d446,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245\""
Jun 25 14:16:25.128954 containerd[1350]: time="2024-06-25T14:16:25.128845808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 25 14:16:26.243999 systemd-networkd[1138]: cali99fc84fe020: Gained IPv6LL
Jun 25 14:16:26.524318 containerd[1350]: time="2024-06-25T14:16:26.524214058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:16:26.524836 containerd[1350]: time="2024-06-25T14:16:26.524789143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Jun 25 14:16:26.525535 containerd[1350]: time="2024-06-25T14:16:26.525502630Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:16:26.527207 containerd[1350]: time="2024-06-25T14:16:26.527145126Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:16:26.528820 containerd[1350]: time="2024-06-25T14:16:26.528789622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 14:16:26.529717 containerd[1350]: time="2024-06-25T14:16:26.529683630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.400788221s"
Jun 25 14:16:26.529831 containerd[1350]: time="2024-06-25T14:16:26.529811072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Jun 25 14:16:26.533715 containerd[1350]: time="2024-06-25T14:16:26.533662629Z" level=info msg="CreateContainer within sandbox \"115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 25 14:16:26.545434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005429945.mount: Deactivated successfully.
Jun 25 14:16:26.578177 containerd[1350]: time="2024-06-25T14:16:26.578126617Z" level=info msg="CreateContainer within sandbox \"115d2b8e4ec445944e2e6bc406224434f4f8c75ba294209b6c97a12f45230245\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a869217bb290732d17e9d6f851f57631ed4752d8b0cf2e166eb131efbaef0e8a\""
Jun 25 14:16:26.578941 containerd[1350]: time="2024-06-25T14:16:26.578887505Z" level=info msg="StartContainer for \"a869217bb290732d17e9d6f851f57631ed4752d8b0cf2e166eb131efbaef0e8a\""
Jun 25 14:16:26.732364 containerd[1350]: time="2024-06-25T14:16:26.732316104Z" level=info msg="StartContainer for \"a869217bb290732d17e9d6f851f57631ed4752d8b0cf2e166eb131efbaef0e8a\" returns successfully"
Jun 25 14:16:26.752871 systemd[1]: run-containerd-runc-k8s.io-a869217bb290732d17e9d6f851f57631ed4752d8b0cf2e166eb131efbaef0e8a-runc.lvBFsX.mount: Deactivated successfully.
Jun 25 14:16:27.065912 kubelet[2386]: I0625 14:16:27.065866 2386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868fd47f59-zs4f4" podStartSLOduration=2.662689818 podCreationTimestamp="2024-06-25 14:16:23 +0000 UTC" firstStartedPulling="2024-06-25 14:16:25.12702923 +0000 UTC m=+69.408343256" lastFinishedPulling="2024-06-25 14:16:26.530166715 +0000 UTC m=+70.811480741" observedRunningTime="2024-06-25 14:16:27.0654793 +0000 UTC m=+71.346793326" watchObservedRunningTime="2024-06-25 14:16:27.065827303 +0000 UTC m=+71.347141329"
Jun 25 14:16:27.077000 audit[5041]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:16:27.077000 audit[5041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff197f940 a2=0 a3=1 items=0 ppid=2550 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:27.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:16:27.079000 audit[5041]: NETFILTER_CFG table=nat:121 family=2 entries=44 op=nft_register_rule pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:16:27.079000 audit[5041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=fffff197f940 a2=0 a3=1 items=0 ppid=2550 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:27.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:16:27.619000 audit[5049]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:16:27.619000 audit[5049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffce080dc0 a2=0 a3=1 items=0 ppid=2550 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:27.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:16:27.621000 audit[5049]: NETFILTER_CFG table=nat:123 family=2 entries=44 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:16:27.621000 audit[5049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=ffffce080dc0 a2=0 a3=1 items=0 ppid=2550 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:27.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:16:28.075302 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:52590.service - OpenSSH per-connection server daemon (10.0.0.1:52590).
Jun 25 14:16:28.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.23:22-10.0.0.1:52590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:16:28.076154 kernel: kauditd_printk_skb: 28 callbacks suppressed
Jun 25 14:16:28.076211 kernel: audit: type=1130 audit(1719324988.074:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.23:22-10.0.0.1:52590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:16:28.117000 audit[5050]: USER_ACCT pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.118188 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 52590 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE
Jun 25 14:16:28.120091 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:16:28.118000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.122796 kernel: audit: type=1101 audit(1719324988.117:438): pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.122847 kernel: audit: type=1103 audit(1719324988.118:439): pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.122874 kernel: audit: type=1006 audit(1719324988.118:440): pid=5050 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Jun 25 14:16:28.118000 audit[5050]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc07a74e0 a2=3 a3=1 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:28.129857 kernel: audit: type=1300 audit(1719324988.118:440): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc07a74e0 a2=3 a3=1 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:28.129948 kernel: audit: type=1327 audit(1719324988.118:440): proctitle=737368643A20636F7265205B707269765D
Jun 25 14:16:28.118000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 14:16:28.129972 systemd-logind[1335]: New session 20 of user core.
Jun 25 14:16:28.134523 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 25 14:16:28.139000 audit[5050]: USER_START pid=5050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.142000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.148131 kernel: audit: type=1105 audit(1719324988.139:441): pid=5050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.148193 kernel: audit: type=1103 audit(1719324988.142:442): pid=5053 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.355443 sshd[5050]: pam_unix(sshd:session): session closed for user core
Jun 25 14:16:28.355000 audit[5050]: USER_END pid=5050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.356000 audit[5050]: CRED_DISP pid=5050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.360343 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:52590.service: Deactivated successfully.
Jun 25 14:16:28.361500 systemd[1]: session-20.scope: Deactivated successfully.
Jun 25 14:16:28.361521 systemd-logind[1335]: Session 20 logged out. Waiting for processes to exit.
Jun 25 14:16:28.365864 kernel: audit: type=1106 audit(1719324988.355:443): pid=5050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.365932 kernel: audit: type=1104 audit(1719324988.356:444): pid=5050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:16:28.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.23:22-10.0.0.1:52590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:16:28.366296 systemd-logind[1335]: Removed session 20.
Jun 25 14:16:28.619000 audit[5070]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:16:28.619000 audit[5070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd2f9b630 a2=0 a3=1 items=0 ppid=2550 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:28.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:16:28.621000 audit[5070]: NETFILTER_CFG table=nat:125 family=2 entries=51 op=nft_register_chain pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:16:28.621000 audit[5070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18564 a0=3 a1=ffffd2f9b630 a2=0 a3=1 items=0 ppid=2550 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:16:28.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273