Jun 25 14:26:22.864443 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 14:26:22.864462 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:26:22.864470 kernel: efi: EFI v2.70 by EDK II Jun 25 14:26:22.864475 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9210018 MEMRESERVE=0xd9523d18 Jun 25 14:26:22.864480 kernel: random: crng init done Jun 25 14:26:22.864486 kernel: ACPI: Early table checksum verification disabled Jun 25 14:26:22.864492 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jun 25 14:26:22.864499 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jun 25 14:26:22.864504 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864510 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864515 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864520 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864526 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864531 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864539 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864545 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864550 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:26:22.864556 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jun 25 14:26:22.864562 kernel: NUMA: Failed to initialise from firmware Jun 25 14:26:22.864568 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 14:26:22.864573 kernel: NUMA: NODE_DATA [mem 0xdcb07800-0xdcb0cfff] Jun 25 14:26:22.864579 kernel: Zone ranges: Jun 25 14:26:22.864585 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 14:26:22.864592 kernel: DMA32 empty Jun 25 14:26:22.864597 kernel: Normal empty Jun 25 14:26:22.864603 kernel: Movable zone start for each node Jun 25 14:26:22.864609 kernel: Early memory node ranges Jun 25 14:26:22.864614 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jun 25 14:26:22.864620 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jun 25 14:26:22.864626 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jun 25 14:26:22.864631 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jun 25 14:26:22.864637 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jun 25 14:26:22.864642 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jun 25 14:26:22.864648 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jun 25 14:26:22.864654 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 14:26:22.864661 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jun 25 14:26:22.864667 kernel: psci: probing for conduit method from ACPI. Jun 25 14:26:22.864672 kernel: psci: PSCIv1.1 detected in firmware. 
Jun 25 14:26:22.864678 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:26:22.864684 kernel: psci: Trusted OS migration not required Jun 25 14:26:22.864692 kernel: psci: SMC Calling Convention v1.1 Jun 25 14:26:22.864698 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jun 25 14:26:22.864705 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:26:22.864711 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:26:22.864718 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jun 25 14:26:22.864724 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:26:22.864730 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:26:22.864736 kernel: CPU features: detected: Hardware dirty bit management Jun 25 14:26:22.864742 kernel: CPU features: detected: Spectre-v4 Jun 25 14:26:22.864747 kernel: CPU features: detected: Spectre-BHB Jun 25 14:26:22.864754 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:26:22.864761 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:26:22.864767 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 14:26:22.864773 kernel: alternatives: applying boot alternatives Jun 25 14:26:22.864779 kernel: Fallback order for Node 0: 0 Jun 25 14:26:22.864785 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jun 25 14:26:22.864791 kernel: Policy zone: DMA Jun 25 14:26:22.864798 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:26:22.864805 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 14:26:22.864811 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:26:22.864817 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:26:22.864823 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:26:22.864831 kernel: Memory: 2458544K/2572288K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 113744K reserved, 0K cma-reserved) Jun 25 14:26:22.864837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 14:26:22.864843 kernel: trace event string verifier disabled Jun 25 14:26:22.864849 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:26:22.864856 kernel: rcu: RCU event tracing is enabled. Jun 25 14:26:22.864862 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 14:26:22.864868 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:26:22.864874 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:26:22.864880 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 14:26:22.864886 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 14:26:22.864892 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:26:22.864898 kernel: GICv3: 256 SPIs implemented Jun 25 14:26:22.864905 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:26:22.864911 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:26:22.864917 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 14:26:22.864923 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jun 25 14:26:22.864929 kernel: ITS [mem 0x08080000-0x0809ffff] Jun 25 14:26:22.864936 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 14:26:22.864942 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jun 25 14:26:22.864948 kernel: GICv3: using LPI property table @0x00000000400e0000 Jun 25 14:26:22.864954 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400f0000 Jun 25 14:26:22.864960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:26:22.864966 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:26:22.864973 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 14:26:22.864979 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 14:26:22.864986 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 14:26:22.864992 kernel: arm-pv: using stolen time PV Jun 25 14:26:22.864998 kernel: Console: colour dummy device 80x25 Jun 25 14:26:22.865004 kernel: ACPI: Core revision 20220331 Jun 25 14:26:22.865011 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 14:26:22.865017 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:26:22.865023 kernel: LSM: Security Framework initializing Jun 25 14:26:22.865029 kernel: SELinux: Initializing. Jun 25 14:26:22.865036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:26:22.865043 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:26:22.865049 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:26:22.865055 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 14:26:22.865061 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:26:22.865067 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 14:26:22.865073 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:26:22.865080 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:26:22.865086 kernel: Platform MSI: ITS@0x8080000 domain created Jun 25 14:26:22.865093 kernel: PCI/MSI: ITS@0x8080000 domain created Jun 25 14:26:22.865099 kernel: Remapping and enabling EFI services. Jun 25 14:26:22.865106 kernel: smp: Bringing up secondary CPUs ... 
Jun 25 14:26:22.865112 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:26:22.865118 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jun 25 14:26:22.865124 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040100000 Jun 25 14:26:22.865130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:26:22.865137 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 14:26:22.865143 kernel: Detected PIPT I-cache on CPU2 Jun 25 14:26:22.865149 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jun 25 14:26:22.865157 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040110000 Jun 25 14:26:22.865163 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:26:22.865169 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jun 25 14:26:22.865175 kernel: Detected PIPT I-cache on CPU3 Jun 25 14:26:22.865186 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jun 25 14:26:22.865194 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040120000 Jun 25 14:26:22.865257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:26:22.865265 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jun 25 14:26:22.865272 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 14:26:22.865278 kernel: SMP: Total of 4 processors activated. Jun 25 14:26:22.865285 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:26:22.865300 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 14:26:22.865307 kernel: CPU features: detected: Common not Private translations Jun 25 14:26:22.865313 kernel: CPU features: detected: CRC32 instructions Jun 25 14:26:22.865320 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 14:26:22.865326 kernel: CPU features: detected: LSE atomic instructions Jun 25 14:26:22.865333 kernel: CPU features: detected: Privileged Access Never Jun 25 14:26:22.865341 kernel: CPU features: detected: RAS Extension Support Jun 25 14:26:22.865347 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 25 14:26:22.865354 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:26:22.865360 kernel: alternatives: applying system-wide alternatives Jun 25 14:26:22.865367 kernel: devtmpfs: initialized Jun 25 14:26:22.865374 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:26:22.865380 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 14:26:22.865387 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:26:22.865393 kernel: SMBIOS 3.0.0 present. 
Jun 25 14:26:22.865408 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jun 25 14:26:22.865415 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:26:22.865422 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:26:22.865428 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:26:22.865435 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:26:22.865442 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:26:22.865448 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Jun 25 14:26:22.865455 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:26:22.865461 kernel: cpuidle: using governor menu Jun 25 14:26:22.865469 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 25 14:26:22.865476 kernel: ASID allocator initialised with 32768 entries Jun 25 14:26:22.865482 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:26:22.865489 kernel: Serial: AMBA PL011 UART driver Jun 25 14:26:22.865495 kernel: KASLR enabled Jun 25 14:26:22.865502 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:26:22.865508 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:26:22.865515 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:26:22.865521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:26:22.865529 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:26:22.865535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:26:22.865542 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:26:22.865548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:26:22.865555 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:26:22.865561 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:26:22.865568 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:26:22.865574 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:26:22.865581 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:26:22.865588 kernel: ACPI: Interpreter enabled Jun 25 14:26:22.865595 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:26:22.865601 kernel: ACPI: MCFG table detected, 1 entries Jun 25 14:26:22.865608 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jun 25 14:26:22.865614 kernel: printk: console [ttyAMA0] enabled Jun 25 14:26:22.865620 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 14:26:22.865737 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 14:26:22.865801 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 14:26:22.865861 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 25 14:26:22.865918 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jun 25 14:26:22.865976 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jun 25 14:26:22.865985 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jun 25 14:26:22.865991 kernel: PCI host bridge to bus 0000:00 Jun 25 14:26:22.866055 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jun 25 14:26:22.866108 kernel: pci_bus 0000:00: root bus resource 
[io 0x0000-0xffff window] Jun 25 14:26:22.866162 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jun 25 14:26:22.866226 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 14:26:22.866311 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jun 25 14:26:22.866381 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 14:26:22.866441 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jun 25 14:26:22.866498 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jun 25 14:26:22.866555 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jun 25 14:26:22.866615 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jun 25 14:26:22.866673 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jun 25 14:26:22.866730 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jun 25 14:26:22.866782 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jun 25 14:26:22.866835 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 25 14:26:22.866887 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jun 25 14:26:22.866895 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 25 14:26:22.866904 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 25 14:26:22.866910 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 25 14:26:22.866917 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 25 14:26:22.866923 kernel: iommu: Default domain type: Translated Jun 25 14:26:22.866930 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:26:22.866936 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:26:22.866943 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:26:22.866949 kernel: PTP clock support registered Jun 25 14:26:22.866955 kernel: Registered efivars operations Jun 25 14:26:22.866963 kernel: vgaarb: loaded Jun 25 14:26:22.866969 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:26:22.866976 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:26:22.866982 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:26:22.866988 kernel: pnp: PnP ACPI init Jun 25 14:26:22.867050 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jun 25 14:26:22.867060 kernel: pnp: PnP ACPI: found 1 devices Jun 25 14:26:22.867067 kernel: NET: Registered PF_INET protocol family Jun 25 14:26:22.867075 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:26:22.867082 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:26:22.867088 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:26:22.867095 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:26:22.867102 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:26:22.867108 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:26:22.867114 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:26:22.867121 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:26:22.867127 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:26:22.867135 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:26:22.867142 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jun 25 14:26:22.867148 kernel: kvm [1]: HYP mode not available Jun 25 14:26:22.867154 kernel: Initialise system trusted keyrings Jun 25 14:26:22.867161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:26:22.867167 kernel: Key type asymmetric registered Jun 25 14:26:22.867173 kernel: Asymmetric key parser 'x509' registered Jun 25 14:26:22.867180 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:26:22.867186 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:26:22.867194 kernel: io scheduler mq-deadline registered Jun 25 14:26:22.867208 kernel: io scheduler kyber registered Jun 25 14:26:22.867215 kernel: io scheduler bfq registered Jun 25 14:26:22.867222 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 25 14:26:22.867228 kernel: ACPI: button: Power Button [PWRB] Jun 25 14:26:22.867235 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 25 14:26:22.867300 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jun 25 14:26:22.867309 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:26:22.867316 kernel: thunder_xcv, ver 1.0 Jun 25 14:26:22.867324 kernel: thunder_bgx, ver 1.0 Jun 25 14:26:22.867331 kernel: nicpf, ver 1.0 Jun 25 14:26:22.867337 kernel: nicvf, ver 1.0 Jun 25 14:26:22.867405 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:26:22.867461 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:26:22 UTC (1719325582) Jun 25 14:26:22.867470 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:26:22.867477 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:26:22.867484 kernel: Segment Routing with IPv6 Jun 25 14:26:22.867492 kernel: In-situ OAM 
(IOAM) with IPv6 Jun 25 14:26:22.867499 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:26:22.867505 kernel: Key type dns_resolver registered Jun 25 14:26:22.867511 kernel: registered taskstats version 1 Jun 25 14:26:22.867518 kernel: Loading compiled-in X.509 certificates Jun 25 14:26:22.867525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:26:22.867531 kernel: Key type .fscrypt registered Jun 25 14:26:22.867538 kernel: Key type fscrypt-provisioning registered Jun 25 14:26:22.867544 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 14:26:22.867552 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:26:22.867558 kernel: ima: No architecture policies found Jun 25 14:26:22.867565 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:26:22.867571 kernel: clk: Disabling unused clocks Jun 25 14:26:22.867578 kernel: Freeing unused kernel memory: 34688K Jun 25 14:26:22.867584 kernel: Run /init as init process Jun 25 14:26:22.867590 kernel: with arguments: Jun 25 14:26:22.867597 kernel: /init Jun 25 14:26:22.867603 kernel: with environment: Jun 25 14:26:22.867610 kernel: HOME=/ Jun 25 14:26:22.867617 kernel: TERM=linux Jun 25 14:26:22.867623 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:26:22.867631 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:26:22.867640 systemd[1]: Detected virtualization kvm. Jun 25 14:26:22.867647 systemd[1]: Detected architecture arm64. Jun 25 14:26:22.867653 systemd[1]: Running in initrd. Jun 25 14:26:22.867660 systemd[1]: No hostname configured, using default hostname. Jun 25 14:26:22.867668 systemd[1]: Hostname set to <localhost>. Jun 25 14:26:22.867676 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:26:22.867683 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:26:22.867690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:26:22.867697 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:26:22.867704 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:26:22.867711 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:26:22.867717 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:26:22.867726 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:26:22.867733 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:26:22.867741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:26:22.867748 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:26:22.867755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:26:22.867762 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:26:22.867769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:26:22.867777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:26:22.867784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 14:26:22.867791 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:26:22.867798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:26:22.867805 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:26:22.867812 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:26:22.867819 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:26:22.867826 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:26:22.867833 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 14:26:22.867841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:26:22.867849 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:26:22.867856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:26:22.867863 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:26:22.867870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:26:22.867877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:26:22.867887 systemd-journald[225]: Journal started Jun 25 14:26:22.867923 systemd-journald[225]: Runtime Journal (/run/log/journal/cb676cbc19cc4f87ad376a8fff63b7c3) is 6.0M, max 48.6M, 42.6M free. Jun 25 14:26:22.858850 systemd-modules-load[226]: Inserted module 'overlay' Jun 25 14:26:22.870242 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:26:22.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.873222 kernel: audit: type=1130 audit(1719325582.869:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.875224 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:26:22.876413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:26:22.879677 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:26:22.890633 kernel: Bridge firewalling registered Jun 25 14:26:22.890651 kernel: audit: type=1130 audit(1719325582.880:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.890660 kernel: audit: type=1130 audit(1719325582.885:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.890669 kernel: audit: type=1334 audit(1719325582.888:5): prog-id=6 op=LOAD Jun 25 14:26:22.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:22.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.888000 audit: BPF prog-id=6 op=LOAD Jun 25 14:26:22.879816 systemd-modules-load[226]: Inserted module 'br_netfilter' Jun 25 14:26:22.892778 kernel: SCSI subsystem initialized Jun 25 14:26:22.881264 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 14:26:22.883947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:26:22.890500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:26:22.896389 dracut-cmdline[248]: dracut-dracut-053 Jun 25 14:26:22.898752 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:26:22.902597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 14:26:22.902615 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:26:22.902624 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:26:22.907743 systemd-modules-load[226]: Inserted module 'dm_multipath' Jun 25 14:26:22.908979 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:26:22.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.913244 kernel: audit: type=1130 audit(1719325582.909:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.916394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:26:22.923849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:26:22.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.927050 systemd-resolved[253]: Positive Trust Anchors: Jun 25 14:26:22.928630 kernel: audit: type=1130 audit(1719325582.925:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.927057 systemd-resolved[253]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:26:22.927084 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:26:22.931304 systemd-resolved[253]: Defaulting to hostname 'linux'. Jun 25 14:26:22.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.932050 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:26:22.938331 kernel: audit: type=1130 audit(1719325582.933:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:22.936784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:26:22.980264 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:26:22.989222 kernel: iscsi: registered transport (tcp) Jun 25 14:26:23.006231 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:26:23.006266 kernel: QLogic iSCSI HBA Driver Jun 25 14:26:23.052505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:26:23.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.057298 kernel: audit: type=1130 audit(1719325583.053:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.061415 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 14:26:23.119580 kernel: raid6: neonx8 gen() 15666 MB/s Jun 25 14:26:23.136258 kernel: raid6: neonx4 gen() 15651 MB/s Jun 25 14:26:23.153227 kernel: raid6: neonx2 gen() 13252 MB/s Jun 25 14:26:23.170227 kernel: raid6: neonx1 gen() 10423 MB/s Jun 25 14:26:23.187413 kernel: raid6: int64x8 gen() 6971 MB/s Jun 25 14:26:23.204218 kernel: raid6: int64x4 gen() 7330 MB/s Jun 25 14:26:23.221229 kernel: raid6: int64x2 gen() 6124 MB/s Jun 25 14:26:23.238251 kernel: raid6: int64x1 gen() 5047 MB/s Jun 25 14:26:23.238282 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s Jun 25 14:26:23.255253 kernel: raid6: .... xor() 11851 MB/s, rmw enabled Jun 25 14:26:23.255283 kernel: raid6: using neon recovery algorithm Jun 25 14:26:23.262415 kernel: xor: measuring software checksum speed Jun 25 14:26:23.262442 kernel: 8regs : 19883 MB/sec Jun 25 14:26:23.263267 kernel: 32regs : 19697 MB/sec Jun 25 14:26:23.264457 kernel: arm64_neon : 27080 MB/sec Jun 25 14:26:23.264475 kernel: xor: using function: arm64_neon (27080 MB/sec) Jun 25 14:26:23.319244 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:26:23.329337 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jun 25 14:26:23.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.337324 kernel: audit: type=1130 audit(1719325583.329:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.332000 audit: BPF prog-id=7 op=LOAD Jun 25 14:26:23.337000 audit: BPF prog-id=8 op=LOAD Jun 25 14:26:23.351509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:26:23.363586 systemd-udevd[429]: Using default interface naming scheme 'v252'. Jun 25 14:26:23.366833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:26:23.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.370083 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:26:23.385816 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation Jun 25 14:26:23.413567 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:26:23.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.424359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:26:23.458770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:26:23.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:23.499746 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 25 14:26:23.508378 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 14:26:23.508471 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 14:26:23.508481 kernel: GPT:9289727 != 19775487 Jun 25 14:26:23.508496 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 14:26:23.508504 kernel: GPT:9289727 != 19775487 Jun 25 14:26:23.508512 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 14:26:23.508520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:26:23.524394 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 14:26:23.526657 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (488) Jun 25 14:26:23.529856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:26:23.531712 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (492) Jun 25 14:26:23.536063 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 14:26:23.538905 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jun 25 14:26:23.539682 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 14:26:23.552553 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:26:23.557674 disk-uuid[505]: Primary Header is updated. Jun 25 14:26:23.557674 disk-uuid[505]: Secondary Entries is updated. Jun 25 14:26:23.557674 disk-uuid[505]: Secondary Header is updated. Jun 25 14:26:23.562228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:26:24.573947 disk-uuid[506]: The operation has completed successfully. Jun 25 14:26:24.577840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:26:24.619189 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:26:24.620665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:26:24.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.639155 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:26:24.643988 sh[519]: Success Jun 25 14:26:24.665222 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:26:24.709096 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:26:24.725130 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:26:24.727299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:26:24.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.733848 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:26:24.733878 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:26:24.733888 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:26:24.738556 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:26:24.738642 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:26:24.741702 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:26:24.742733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:26:24.759609 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 14:26:24.761182 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 14:26:24.777764 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:26:24.777815 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:26:24.777825 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:26:24.797367 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:26:24.799259 kernel: BTRFS info (device vda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:26:24.811439 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jun 25 14:26:24.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.819445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:26:24.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.894000 audit: BPF prog-id=9 op=LOAD Jun 25 14:26:24.893537 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:26:24.905632 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:26:24.930101 ignition[636]: Ignition 2.15.0 Jun 25 14:26:24.930114 ignition[636]: Stage: fetch-offline Jun 25 14:26:24.930601 systemd-networkd[708]: lo: Link UP Jun 25 14:26:24.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.930176 ignition[636]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:26:24.930605 systemd-networkd[708]: lo: Gained carrier Jun 25 14:26:24.930186 ignition[636]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:26:24.930955 systemd-networkd[708]: Enumeration completed Jun 25 14:26:24.930338 ignition[636]: parsed url from cmdline: "" Jun 25 14:26:24.931054 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:26:24.930342 ignition[636]: no config URL provided Jun 25 14:26:24.931139 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:26:24.930346 ignition[636]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:26:24.931142 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:26:24.930355 ignition[636]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:26:24.932275 systemd-networkd[708]: eth0: Link UP Jun 25 14:26:24.930386 ignition[636]: op(1): [started] loading QEMU firmware config module Jun 25 14:26:24.932279 systemd-networkd[708]: eth0: Gained carrier Jun 25 14:26:24.930391 ignition[636]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 14:26:24.932296 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:26:24.933083 systemd[1]: Reached target network.target - Network. Jun 25 14:26:24.946424 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:26:24.949684 ignition[636]: op(1): [finished] loading QEMU firmware config module Jun 25 14:26:24.956318 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 14:26:24.958699 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:26:24.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.960547 systemd[1]: Starting iscsid.service - Open-iSCSI... 
Jun 25 14:26:24.963582 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:26:24.963582 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 14:26:24.963582 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:26:24.963582 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:26:24.963582 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:26:24.963582 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:26:24.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.966908 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:26:24.971234 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:26:24.986339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:26:24.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:24.987423 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:26:24.989034 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:26:24.991009 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:26:25.006466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:26:25.008433 ignition[636]: parsing config with SHA512: 479a3f3c03cb02f1ad1222ed7baf937ff2846b315f3277db2cd7590ea1c18322628c3e756659860baf7d0e192f62aaa426de9e4590b5489bd0b379ccc5841d60 Jun 25 14:26:25.013185 unknown[636]: fetched base config from "system" Jun 25 14:26:25.013220 unknown[636]: fetched user config from "qemu" Jun 25 14:26:25.013812 ignition[636]: fetch-offline: fetch-offline passed Jun 25 14:26:25.013901 ignition[636]: Ignition finished successfully Jun 25 14:26:25.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.014951 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:26:25.016022 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 14:26:25.016904 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:26:25.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.018689 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 14:26:25.033483 ignition[729]: Ignition 2.15.0 Jun 25 14:26:25.033494 ignition[729]: Stage: kargs Jun 25 14:26:25.033596 ignition[729]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:26:25.033606 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:26:25.034576 ignition[729]: kargs: kargs passed Jun 25 14:26:25.034625 ignition[729]: Ignition finished successfully Jun 25 14:26:25.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.037185 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:26:25.046416 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 14:26:25.056524 ignition[737]: Ignition 2.15.0 Jun 25 14:26:25.056534 ignition[737]: Stage: disks Jun 25 14:26:25.056630 ignition[737]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:26:25.056639 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:26:25.059247 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:26:25.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.057596 ignition[737]: disks: disks passed Jun 25 14:26:25.060676 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:26:25.057643 ignition[737]: Ignition finished successfully Jun 25 14:26:25.062028 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:26:25.063115 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:26:25.064550 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:26:25.065717 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:26:25.076439 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:26:25.095762 systemd-fsck[747]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 14:26:25.126362 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:26:25.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.138374 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:26:25.197235 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:26:25.198035 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:26:25.198905 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:26:25.215342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:26:25.217995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:26:25.220147 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 14:26:25.222213 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (753) Jun 25 14:26:25.222240 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jun 25 14:26:25.222304 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:26:25.227378 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:26:25.227415 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:26:25.227432 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:26:25.227831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 14:26:25.229421 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:26:25.235565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:26:25.278084 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:26:25.285333 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:26:25.289469 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:26:25.293040 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:26:25.389904 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:26:25.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.398374 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:26:25.399928 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:26:25.405048 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:26:25.407256 kernel: BTRFS info (device vda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:26:25.422292 ignition[864]: INFO : Ignition 2.15.0 Jun 25 14:26:25.422292 ignition[864]: INFO : Stage: mount Jun 25 14:26:25.423564 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:26:25.423564 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:26:25.425228 ignition[864]: INFO : mount: mount passed Jun 25 14:26:25.425228 ignition[864]: INFO : Ignition finished successfully Jun 25 14:26:25.425296 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:26:25.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:25.437350 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:26:25.438265 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:26:25.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.207491 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:26:26.216780 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (876) Jun 25 14:26:26.216831 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:26:26.216842 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:26:26.217456 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:26:26.220870 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 14:26:26.244475 ignition[894]: INFO : Ignition 2.15.0 Jun 25 14:26:26.244475 ignition[894]: INFO : Stage: files Jun 25 14:26:26.245696 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:26:26.245696 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:26:26.245696 ignition[894]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:26:26.248084 ignition[894]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:26:26.248084 ignition[894]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:26:26.251075 ignition[894]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:26:26.252095 ignition[894]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:26:26.252095 ignition[894]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:26:26.251673 unknown[894]: wrote ssh authorized keys file for user: core Jun 25 14:26:26.254816 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:26:26.254816 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:26:26.294370 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 14:26:26.348179 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:26:26.350083 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 14:26:26.660674 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 14:26:26.772332 systemd-networkd[708]: eth0: Gained IPv6LL Jun 25 14:26:26.907347 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:26:26.907347 ignition[894]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 14:26:26.912835 ignition[894]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 14:26:26.940086 ignition[894]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 14:26:26.943567 ignition[894]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 14:26:26.943567 ignition[894]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:26:26.943567 ignition[894]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:26:26.943567 ignition[894]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:26:26.943567 ignition[894]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:26:26.943567 ignition[894]: INFO : files: files passed Jun 25 14:26:26.943567 ignition[894]: INFO : Ignition finished successfully Jun 25 14:26:26.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.943277 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jun 25 14:26:26.950395 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:26:26.952079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:26:26.955490 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:26:26.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.955589 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:26:26.960778 initrd-setup-root-after-ignition[920]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 14:26:26.961823 initrd-setup-root-after-ignition[922]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:26:26.961823 initrd-setup-root-after-ignition[922]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:26:26.964322 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:26:26.965874 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:26:26.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.966851 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:26:26.980434 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:26:26.993396 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:26:26.993512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:26:26.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:26.995449 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:26:26.997089 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:26:26.998618 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:26:27.000444 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:26:27.012617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:26:27.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.014736 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:26:27.023167 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jun 25 14:26:27.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.024116 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:26:27.025148 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:26:27.026003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:26:27.026126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:26:27.027175 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:26:27.028601 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:26:27.029928 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:26:27.031150 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:26:27.032766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:26:27.034931 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:26:27.036338 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:26:27.037931 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:26:27.039427 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:26:27.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.040887 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:26:27.042310 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:26:27.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.043681 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:26:27.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.043797 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:26:27.045140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:26:27.046360 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:26:27.046466 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:26:27.048197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:26:27.048320 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:26:27.049556 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:26:27.050634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 14:26:27.054308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:26:27.055478 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:26:27.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:27.056673 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:26:27.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.058015 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:26:27.058123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:26:27.059816 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:26:27.059910 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:26:27.076556 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:26:27.078191 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 14:26:27.079322 iscsid[715]: iscsid shutting down. Jun 25 14:26:27.079686 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:26:27.081934 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:26:27.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.082103 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:26:27.083512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:26:27.083612 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:26:27.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.089966 ignition[940]: INFO : Ignition 2.15.0 Jun 25 14:26:27.089966 ignition[940]: INFO : Stage: umount Jun 25 14:26:27.089966 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:26:27.089966 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:26:27.089966 ignition[940]: INFO : umount: umount passed Jun 25 14:26:27.089966 ignition[940]: INFO : Ignition finished successfully Jun 25 14:26:27.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.092883 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:26:27.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.093517 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 14:26:27.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.093622 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 14:26:27.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:27.095830 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:26:27.095926 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:26:27.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.097734 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:26:27.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.097811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:26:27.098716 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 14:26:27.098758 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:26:27.100052 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:26:27.100096 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:26:27.101508 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:26:27.101548 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:26:27.103173 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:26:27.105875 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:26:27.106004 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:26:27.107749 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:26:27.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.107888 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:26:27.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.111319 systemd[1]: Stopped target network.target - Network. Jun 25 14:26:27.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.112364 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:26:27.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.112406 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:26:27.113744 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:26:27.115065 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:26:27.121520 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:26:27.121612 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:26:27.124150 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jun 25 14:26:27.124191 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:26:27.124244 systemd-networkd[708]: eth0: DHCPv6 lease lost Jun 25 14:26:27.126383 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:26:27.126522 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:26:27.128956 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:26:27.129043 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:26:27.131747 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:26:27.131780 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:26:27.144000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:26:27.144000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:26:27.147615 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 14:26:27.148329 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:26:27.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.148400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:26:27.156494 kernel: kauditd_printk_skb: 53 callbacks suppressed Jun 25 14:26:27.156528 kernel: audit: type=1131 audit(1719325587.151:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.150115 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:26:27.161615 kernel: audit: type=1131 audit(1719325587.157:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.150155 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:26:27.166088 kernel: audit: type=1131 audit(1719325587.162:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.153082 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:26:27.153128 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:26:27.157529 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:26:27.157576 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:26:27.166686 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 14:26:27.171861 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:26:27.171943 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:26:27.182693 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:26:27.182875 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:26:27.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.184505 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:26:27.188493 kernel: audit: type=1131 audit(1719325587.183:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.184541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:26:27.187879 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:26:27.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.187916 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:26:27.203545 kernel: audit: type=1131 audit(1719325587.194:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.203570 kernel: audit: type=1131 audit(1719325587.197:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.189163 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:26:27.189213 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:26:27.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.194608 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:26:27.209228 kernel: audit: type=1131 audit(1719325587.205:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.194658 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:26:27.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.198126 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:26:27.213727 kernel: audit: type=1131 audit(1719325587.209:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:27.198176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:26:27.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.206324 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:26:27.221270 kernel: audit: type=1131 audit(1719325587.214:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.221292 kernel: audit: type=1131 audit(1719325587.217:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.208591 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 14:26:27.208663 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:26:27.212990 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:26:27.213032 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:26:27.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.214586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:26:27.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.214621 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:26:27.218621 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 14:26:27.222750 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:26:27.222871 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:26:27.224758 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:26:27.224869 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:26:27.226369 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:26:27.243475 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:26:27.250915 systemd[1]: Switching root. Jun 25 14:26:27.263275 systemd-journald[225]: Journal stopped Jun 25 14:26:27.961637 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jun 25 14:26:27.961695 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jun 25 14:26:27.961708 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:26:27.961718 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:26:27.961731 kernel: SELinux: policy capability open_perms=1 Jun 25 14:26:27.961743 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:26:27.961763 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:26:27.961776 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:26:27.961785 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:26:27.961799 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:26:27.961811 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:26:27.961822 systemd[1]: Successfully loaded SELinux policy in 49.120ms. Jun 25 14:26:27.961838 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.920ms. Jun 25 14:26:27.961850 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:26:27.961861 systemd[1]: Detected virtualization kvm. Jun 25 14:26:27.961872 systemd[1]: Detected architecture arm64. Jun 25 14:26:27.961884 systemd[1]: Detected first boot. Jun 25 14:26:27.961894 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:26:27.961904 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:26:27.961915 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 14:26:27.961925 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 14:26:27.961935 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 14:26:27.961946 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:26:27.961957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:26:27.961968 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:26:27.961980 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:26:27.961991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:26:27.962023 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:26:27.962033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:26:27.962043 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:26:27.962055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:26:27.962066 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:26:27.962076 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:26:27.962088 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:26:27.962099 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 14:26:27.962109 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 14:26:27.962120 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jun 25 14:26:27.962130 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:26:27.962144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:26:27.962155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:26:27.962166 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:26:27.962178 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:26:27.962188 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:26:27.962207 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:26:27.962219 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:26:27.962230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:26:27.962241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:26:27.962254 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:26:27.962271 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:26:27.962294 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:26:27.962307 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:26:27.962318 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:26:27.962329 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:26:27.962339 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:26:27.962351 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:26:27.962362 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:26:27.962376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:26:27.962386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:26:27.962397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:26:27.962409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:26:27.962420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:26:27.962430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:26:27.962441 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:26:27.962452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:26:27.962465 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 14:26:27.962476 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 14:26:27.962488 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 14:26:27.962500 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 14:26:27.962510 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 14:26:27.962520 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 14:26:27.962530 kernel: fuse: init (API version 7.37) Jun 25 14:26:27.962541 kernel: loop: module loaded Jun 25 14:26:27.962553 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 14:26:27.962564 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:26:27.962575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:26:27.962585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 14:26:27.962597 kernel: ACPI: bus type drm_connector registered Jun 25 14:26:27.962608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:26:27.962619 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 14:26:27.962629 systemd[1]: Stopped verity-setup.service. Jun 25 14:26:27.962640 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:26:27.962650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:26:27.962660 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:26:27.962673 systemd-journald[1039]: Journal started Jun 25 14:26:27.962715 systemd-journald[1039]: Runtime Journal (/run/log/journal/cb676cbc19cc4f87ad376a8fff63b7c3) is 6.0M, max 48.6M, 42.6M free. Jun 25 14:26:27.349000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:26:27.453000 audit: BPF prog-id=10 op=LOAD Jun 25 14:26:27.453000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:26:27.453000 audit: BPF prog-id=11 op=LOAD Jun 25 14:26:27.453000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:26:27.817000 audit: BPF prog-id=12 op=LOAD Jun 25 14:26:27.817000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:26:27.817000 audit: BPF prog-id=13 op=LOAD Jun 25 14:26:27.817000 audit: BPF prog-id=14 op=LOAD Jun 25 14:26:27.817000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:26:27.817000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:26:27.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.831000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:26:27.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:27.930000 audit: BPF prog-id=15 op=LOAD Jun 25 14:26:27.931000 audit: BPF prog-id=16 op=LOAD Jun 25 14:26:27.931000 audit: BPF prog-id=17 op=LOAD Jun 25 14:26:27.931000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:26:27.931000 audit: BPF prog-id=14 op=UNLOAD Jun 25 14:26:27.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.960000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:26:27.960000 audit[1039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd79c48c0 a2=4000 a3=1 items=0 ppid=1 pid=1039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:27.960000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:26:27.806972 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:26:27.806983 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 14:26:27.818090 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 14:26:27.964289 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 14:26:27.965227 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:26:27.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.966444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:26:27.967345 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:26:27.968311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:26:27.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.969614 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:26:27.969742 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 25 14:26:27.970834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:26:27.970959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:26:27.972192 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:26:27.972347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:26:27.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.973545 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:26:27.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.974588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:26:27.974720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:26:27.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.975935 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:26:27.976071 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:26:27.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.977314 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:26:27.977446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:26:27.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.978899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:26:27.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:27.980005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:26:27.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.981250 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:26:27.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:27.982595 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:26:27.990589 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:26:27.992757 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:26:27.993567 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:26:27.996539 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:26:27.998778 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:26:27.999724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:26:28.001151 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:26:28.002079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:26:28.003346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:26:28.004800 systemd-journald[1039]: Time spent on flushing to /var/log/journal/cb676cbc19cc4f87ad376a8fff63b7c3 is 21.594ms for 972 entries. Jun 25 14:26:28.004800 systemd-journald[1039]: System Journal (/var/log/journal/cb676cbc19cc4f87ad376a8fff63b7c3) is 8.0M, max 195.6M, 187.6M free. Jun 25 14:26:28.035304 systemd-journald[1039]: Received client request to flush runtime journal. Jun 25 14:26:28.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.005370 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:26:28.009313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:26:28.012651 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:26:28.036770 udevadm[1070]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jun 25 14:26:28.013653 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:26:28.014710 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:26:28.015884 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:26:28.021417 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:26:28.022415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:26:28.036460 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:26:28.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.038903 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:26:28.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.046485 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:26:28.065068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:26:28.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.442905 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:26:28.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.444000 audit: BPF prog-id=18 op=LOAD Jun 25 14:26:28.444000 audit: BPF prog-id=19 op=LOAD Jun 25 14:26:28.444000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:26:28.444000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:26:28.450549 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:26:28.469426 systemd-udevd[1074]: Using default interface naming scheme 'v252'. Jun 25 14:26:28.492793 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:26:28.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.494000 audit: BPF prog-id=20 op=LOAD Jun 25 14:26:28.504939 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:26:28.516000 audit: BPF prog-id=21 op=LOAD Jun 25 14:26:28.516000 audit: BPF prog-id=22 op=LOAD Jun 25 14:26:28.517215 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1075) Jun 25 14:26:28.516000 audit: BPF prog-id=23 op=LOAD Jun 25 14:26:28.518166 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:26:28.528366 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jun 25 14:26:28.537287 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1077) Jun 25 14:26:28.555621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 14:26:28.569857 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:26:28.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.609630 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:26:28.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.622495 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:26:28.630729 systemd-networkd[1082]: lo: Link UP Jun 25 14:26:28.630739 systemd-networkd[1082]: lo: Gained carrier Jun 25 14:26:28.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.633847 systemd-networkd[1082]: Enumeration completed Jun 25 14:26:28.633955 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:26:28.633961 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:26:28.633964 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:26:28.635151 systemd-networkd[1082]: eth0: Link UP Jun 25 14:26:28.635154 systemd-networkd[1082]: eth0: Gained carrier Jun 25 14:26:28.635164 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:26:28.636589 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 14:26:28.638284 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:26:28.656365 systemd-networkd[1082]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 14:26:28.663093 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:26:28.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.664180 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:26:28.676449 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:26:28.680011 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:26:28.707162 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:26:28.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:28.708142 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:26:28.709016 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:26:28.709049 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:26:28.709798 systemd[1]: Reached target machines.target - Containers. Jun 25 14:26:28.720493 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:26:28.721465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:26:28.721540 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:26:28.722941 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:26:28.725265 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:26:28.727590 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:26:28.729880 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:26:28.738001 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1112 (bootctl) Jun 25 14:26:28.740363 kernel: loop0: detected capacity change from 0 to 59648 Jun 25 14:26:28.745440 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:26:28.746552 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:26:28.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.790733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:26:28.791360 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:26:28.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.797223 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:26:28.829933 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) Jun 25 14:26:28.829933 systemd-fsck[1121]: /dev/vda1: 242 files, 114659/258078 clusters Jun 25 14:26:28.831721 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:26:28.832228 kernel: loop1: detected capacity change from 0 to 113264 Jun 25 14:26:28.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.839378 systemd[1]: Mounting boot.mount - Boot partition... 
Jun 25 14:26:28.846309 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:26:28.855632 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 14:26:28.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.874227 kernel: loop2: detected capacity change from 0 to 193208 Jun 25 14:26:28.908227 kernel: loop3: detected capacity change from 0 to 59648 Jun 25 14:26:28.915518 kernel: loop4: detected capacity change from 0 to 113264 Jun 25 14:26:28.923257 kernel: loop5: detected capacity change from 0 to 193208 Jun 25 14:26:28.929922 (sd-sysext)[1125]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 14:26:28.930324 (sd-sysext)[1125]: Merged extensions into '/usr'. Jun 25 14:26:28.932034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:26:28.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:28.940527 systemd[1]: Starting ensure-sysext.service... Jun 25 14:26:28.943362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:26:28.953699 systemd[1]: Reloading. Jun 25 14:26:28.955989 systemd-tmpfiles[1127]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:26:28.957087 systemd-tmpfiles[1127]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:26:28.957796 systemd-tmpfiles[1127]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:26:28.959331 systemd-tmpfiles[1127]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:26:28.997864 ldconfig[1111]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:26:29.082587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:26:29.135000 audit: BPF prog-id=24 op=LOAD Jun 25 14:26:29.136000 audit: BPF prog-id=20 op=UNLOAD Jun 25 14:26:29.136000 audit: BPF prog-id=25 op=LOAD Jun 25 14:26:29.136000 audit: BPF prog-id=26 op=LOAD Jun 25 14:26:29.136000 audit: BPF prog-id=18 op=UNLOAD Jun 25 14:26:29.137000 audit: BPF prog-id=19 op=UNLOAD Jun 25 14:26:29.137000 audit: BPF prog-id=27 op=LOAD Jun 25 14:26:29.137000 audit: BPF prog-id=21 op=UNLOAD Jun 25 14:26:29.138000 audit: BPF prog-id=28 op=LOAD Jun 25 14:26:29.138000 audit: BPF prog-id=29 op=LOAD Jun 25 14:26:29.138000 audit: BPF prog-id=22 op=UNLOAD Jun 25 14:26:29.138000 audit: BPF prog-id=23 op=UNLOAD Jun 25 14:26:29.139000 audit: BPF prog-id=30 op=LOAD Jun 25 14:26:29.139000 audit: BPF prog-id=15 op=UNLOAD Jun 25 14:26:29.139000 audit: BPF prog-id=31 op=LOAD Jun 25 14:26:29.139000 audit: BPF prog-id=32 op=LOAD Jun 25 14:26:29.139000 audit: BPF prog-id=16 op=UNLOAD Jun 25 14:26:29.139000 audit: BPF prog-id=17 op=UNLOAD Jun 25 14:26:29.143972 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
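Editor's note: the (sd-sysext) lines above show three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') being merged into /usr. A minimal sketch of how one might enumerate candidate images before such a merge; the search directories are an assumption based on the usual systemd-sysext conventions, not something stated in this log:

    # Minimal sketch: list system extension images a systemd-sysext merge would
    # typically consider. Directory list is assumed, not taken from this log.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.iterdir()):
            # Raw disk images end in .raw; plain directory trees are also accepted.
            if image.suffix == ".raw" or image.is_dir():
                print(f"{d}: {image.name}")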
Jun 25 14:26:29.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.146349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:26:29.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.150460 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:26:29.153296 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:26:29.155774 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:26:29.158000 audit: BPF prog-id=33 op=LOAD Jun 25 14:26:29.160359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:26:29.163000 audit: BPF prog-id=34 op=LOAD Jun 25 14:26:29.164640 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:26:29.168403 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:26:29.172872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:26:29.174000 audit[1192]: SYSTEM_BOOT pid=1192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.174624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:26:29.177167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:26:29.179948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:26:29.181078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:26:29.181246 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:26:29.182367 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:26:29.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.183882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:26:29.184044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:26:29.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:29.185437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:26:29.185549 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:26:29.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.186941 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:26:29.187055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:26:29.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.190059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:26:29.190231 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:26:29.191762 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:26:29.194743 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:26:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.199751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:26:29.201493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:26:29.205863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:26:29.208806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:26:29.209755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:26:29.209903 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:26:29.210711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:26:29.210847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:26:29.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:29.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.212289 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:26:29.213667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:26:29.213814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:26:29.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.216548 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:26:29.216663 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:26:29.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.220088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:26:29.222606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:26:29.227466 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:26:29.229687 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:26:29.232397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:26:29.233386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:26:29.233553 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:26:29.233692 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:26:29.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.234814 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jun 25 14:26:29.236090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:26:29.236221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:26:29.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:29.238000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:26:29.238000 audit[1212]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff0a47880 a2=420 a3=0 items=0 ppid=1181 pid=1212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:29.238000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:26:29.238718 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:26:29.240900 augenrules[1212]: No rules Jun 25 14:26:29.238862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:26:29.240270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:26:29.240406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:26:29.240519 systemd-resolved[1185]: Positive Trust Anchors: Jun 25 14:26:29.240527 systemd-resolved[1185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:26:29.240554 systemd-resolved[1185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:26:29.241734 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:26:29.241852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:26:29.242990 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:26:29.721926 systemd-timesyncd[1191]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 14:26:29.722169 systemd-timesyncd[1191]: Initial clock synchronization to Tue 2024-06-25 14:26:29.721855 UTC. Jun 25 14:26:29.722889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:26:29.725225 systemd-resolved[1185]: Defaulting to hostname 'linux'. Jun 25 14:26:29.725534 systemd[1]: Finished ensure-sysext.service. Jun 25 14:26:29.726219 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:26:29.727140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:26:29.727183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
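Editor's note: the audit PROCTITLE records above carry the executed command line hex-encoded with NUL separators between arguments; the value logged alongside the auditctl SYSCALL record, 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573, decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A small stdlib-only sketch for decoding such fields:

    # Minimal sketch: decode an audit PROCTITLE value (hex, NUL-separated argv).
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

    # Value copied from the PROCTITLE record above.
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))  # -> /sbin/auditctl -R /etc/audit/audit.rules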
Jun 25 14:26:29.730486 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:26:29.731305 systemd[1]: Reached target network.target - Network. Jun 25 14:26:29.732010 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:26:29.732834 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:26:29.733659 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:26:29.734694 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:26:29.735660 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:26:29.736558 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:26:29.737323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:26:29.738121 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:26:29.738150 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:26:29.738792 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:26:29.740172 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:26:29.742222 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:26:29.754203 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:26:29.755105 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:26:29.755573 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:26:29.756514 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:26:29.757176 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:26:29.757922 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:26:29.757950 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:26:29.759161 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:26:29.761202 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:26:29.763464 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:26:29.765727 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:26:29.766625 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:26:29.768540 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:26:29.771055 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:26:29.775083 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:26:29.776803 jq[1222]: false Jun 25 14:26:29.777901 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:26:29.782735 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jun 25 14:26:29.783917 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:26:29.783983 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 14:26:29.784447 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:26:29.786101 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:26:29.788710 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 14:26:29.794396 jq[1238]: true Jun 25 14:26:29.793940 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:26:29.794118 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:26:29.794455 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:26:29.794614 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:26:29.796741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:26:29.798775 extend-filesystems[1223]: Found loop3 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found loop4 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found loop5 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda1 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda2 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda3 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found usr Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda4 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda6 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda7 Jun 25 14:26:29.798775 extend-filesystems[1223]: Found vda9 Jun 25 14:26:29.798775 extend-filesystems[1223]: Checking size of /dev/vda9 Jun 25 14:26:29.796913 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:26:29.815257 jq[1243]: true Jun 25 14:26:29.817701 tar[1241]: linux-arm64/helm Jun 25 14:26:29.820299 dbus-daemon[1221]: [system] SELinux support is enabled Jun 25 14:26:29.820636 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:26:29.822474 update_engine[1236]: I0625 14:26:29.820376 1236 main.cc:92] Flatcar Update Engine starting Jun 25 14:26:29.823903 update_engine[1236]: I0625 14:26:29.823150 1236 update_check_scheduler.cc:74] Next update check in 3m42s Jun 25 14:26:29.823366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:26:29.823396 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:26:29.824208 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:26:29.824230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:26:29.825159 systemd[1]: Started update-engine.service - Update Engine. 
Jun 25 14:26:29.832685 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 14:26:29.842363 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1094) Jun 25 14:26:29.846496 extend-filesystems[1223]: Resized partition /dev/vda9 Jun 25 14:26:29.861681 extend-filesystems[1266]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 14:26:29.869364 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 14:26:29.883178 bash[1262]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:26:29.883959 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:26:29.885211 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 14:26:29.898920 locksmithd[1261]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:26:29.903368 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 14:26:29.915201 extend-filesystems[1266]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 14:26:29.915201 extend-filesystems[1266]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 14:26:29.915201 extend-filesystems[1266]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 14:26:29.927232 extend-filesystems[1223]: Resized filesystem in /dev/vda9 Jun 25 14:26:29.915299 systemd-logind[1231]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 14:26:29.916474 systemd-logind[1231]: New seat seat0. Jun 25 14:26:29.921053 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:26:29.925062 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:26:29.925230 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:26:30.046634 containerd[1244]: time="2024-06-25T14:26:30.046500127Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:26:30.073767 containerd[1244]: time="2024-06-25T14:26:30.073713887Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:26:30.073767 containerd[1244]: time="2024-06-25T14:26:30.073762927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.075588 containerd[1244]: time="2024-06-25T14:26:30.075546687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:26:30.075588 containerd[1244]: time="2024-06-25T14:26:30.075582967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.075919 containerd[1244]: time="2024-06-25T14:26:30.075891047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:26:30.075919 containerd[1244]: time="2024-06-25T14:26:30.075917167Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jun 25 14:26:30.076019 containerd[1244]: time="2024-06-25T14:26:30.076000967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076076 containerd[1244]: time="2024-06-25T14:26:30.076055847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076151 containerd[1244]: time="2024-06-25T14:26:30.076131887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076228 containerd[1244]: time="2024-06-25T14:26:30.076212127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076560 containerd[1244]: time="2024-06-25T14:26:30.076528887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076597 containerd[1244]: time="2024-06-25T14:26:30.076562367Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:26:30.076597 containerd[1244]: time="2024-06-25T14:26:30.076575407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076716 containerd[1244]: time="2024-06-25T14:26:30.076694207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:26:30.076716 containerd[1244]: time="2024-06-25T14:26:30.076713127Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:26:30.076791 containerd[1244]: time="2024-06-25T14:26:30.076773327Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:26:30.076791 containerd[1244]: time="2024-06-25T14:26:30.076789647Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:26:30.079954 containerd[1244]: time="2024-06-25T14:26:30.079919607Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:26:30.079954 containerd[1244]: time="2024-06-25T14:26:30.079956287Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:26:30.080101 containerd[1244]: time="2024-06-25T14:26:30.079970847Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:26:30.080101 containerd[1244]: time="2024-06-25T14:26:30.080005567Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 14:26:30.080101 containerd[1244]: time="2024-06-25T14:26:30.080021447Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:26:30.080101 containerd[1244]: time="2024-06-25T14:26:30.080031847Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:26:30.080101 containerd[1244]: time="2024-06-25T14:26:30.080043967Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jun 25 14:26:30.080230 containerd[1244]: time="2024-06-25T14:26:30.080206487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:26:30.080260 containerd[1244]: time="2024-06-25T14:26:30.080231647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:26:30.080260 containerd[1244]: time="2024-06-25T14:26:30.080245487Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:26:30.080302 containerd[1244]: time="2024-06-25T14:26:30.080259607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:26:30.080302 containerd[1244]: time="2024-06-25T14:26:30.080274567Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080302 containerd[1244]: time="2024-06-25T14:26:30.080290407Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080383 containerd[1244]: time="2024-06-25T14:26:30.080303287Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080383 containerd[1244]: time="2024-06-25T14:26:30.080316407Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080383 containerd[1244]: time="2024-06-25T14:26:30.080329767Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080383 containerd[1244]: time="2024-06-25T14:26:30.080360687Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080383 containerd[1244]: time="2024-06-25T14:26:30.080374567Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080499 containerd[1244]: time="2024-06-25T14:26:30.080386767Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:26:30.080521 containerd[1244]: time="2024-06-25T14:26:30.080500087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:26:30.080819 containerd[1244]: time="2024-06-25T14:26:30.080791487Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:26:30.080856 containerd[1244]: time="2024-06-25T14:26:30.080830927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.080856 containerd[1244]: time="2024-06-25T14:26:30.080847407Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 14:26:30.080899 containerd[1244]: time="2024-06-25T14:26:30.080869007Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:26:30.081052 containerd[1244]: time="2024-06-25T14:26:30.081032407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081090 containerd[1244]: time="2024-06-25T14:26:30.081056287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jun 25 14:26:30.081090 containerd[1244]: time="2024-06-25T14:26:30.081070567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081090 containerd[1244]: time="2024-06-25T14:26:30.081081527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081146 containerd[1244]: time="2024-06-25T14:26:30.081094087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081146 containerd[1244]: time="2024-06-25T14:26:30.081107447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081146 containerd[1244]: time="2024-06-25T14:26:30.081119047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081146 containerd[1244]: time="2024-06-25T14:26:30.081130047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081146 containerd[1244]: time="2024-06-25T14:26:30.081143927Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:26:30.081282 containerd[1244]: time="2024-06-25T14:26:30.081262967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081315 containerd[1244]: time="2024-06-25T14:26:30.081284007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081315 containerd[1244]: time="2024-06-25T14:26:30.081297207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081315 containerd[1244]: time="2024-06-25T14:26:30.081309247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081387 containerd[1244]: time="2024-06-25T14:26:30.081320967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081387 containerd[1244]: time="2024-06-25T14:26:30.081340287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081387 containerd[1244]: time="2024-06-25T14:26:30.081365847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:26:30.081387 containerd[1244]: time="2024-06-25T14:26:30.081376567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 14:26:30.081673 containerd[1244]: time="2024-06-25T14:26:30.081612567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:26:30.082024 containerd[1244]: time="2024-06-25T14:26:30.081755087Z" level=info msg="Connect containerd service" Jun 25 14:26:30.082024 containerd[1244]: time="2024-06-25T14:26:30.081794407Z" level=info msg="using legacy CRI server" Jun 25 14:26:30.082024 containerd[1244]: time="2024-06-25T14:26:30.081801407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:26:30.082024 containerd[1244]: time="2024-06-25T14:26:30.081917127Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:26:30.082842 containerd[1244]: time="2024-06-25T14:26:30.082810007Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:26:30.084129 containerd[1244]: time="2024-06-25T14:26:30.084084527Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:26:30.084171 containerd[1244]: time="2024-06-25T14:26:30.084134407Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:26:30.084171 containerd[1244]: time="2024-06-25T14:26:30.084154447Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:26:30.084246 containerd[1244]: time="2024-06-25T14:26:30.084170567Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:26:30.085233 containerd[1244]: time="2024-06-25T14:26:30.085172407Z" level=info msg="Start subscribing containerd event" Jun 25 14:26:30.085292 containerd[1244]: time="2024-06-25T14:26:30.085248047Z" level=info msg="Start recovering state" Jun 25 14:26:30.085448 containerd[1244]: time="2024-06-25T14:26:30.085428207Z" level=info msg="Start event monitor" Jun 25 14:26:30.085542 containerd[1244]: time="2024-06-25T14:26:30.085430727Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:26:30.085577 containerd[1244]: time="2024-06-25T14:26:30.085550847Z" level=info msg="Start snapshots syncer" Jun 25 14:26:30.085577 containerd[1244]: time="2024-06-25T14:26:30.085567927Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:26:30.085630 containerd[1244]: time="2024-06-25T14:26:30.085579287Z" level=info msg="Start streaming server" Jun 25 14:26:30.085772 containerd[1244]: time="2024-06-25T14:26:30.085579967Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:26:30.085901 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:26:30.086813 containerd[1244]: time="2024-06-25T14:26:30.086769647Z" level=info msg="containerd successfully booted in 0.041158s" Jun 25 14:26:30.220371 tar[1241]: linux-arm64/LICENSE Jun 25 14:26:30.220560 tar[1241]: linux-arm64/README.md Jun 25 14:26:30.232850 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:26:30.898605 systemd-networkd[1082]: eth0: Gained IPv6LL Jun 25 14:26:30.900596 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:26:30.901692 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:26:30.908913 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 14:26:30.911543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:26:30.913711 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:26:30.921660 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 14:26:30.921819 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 14:26:30.922957 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 14:26:30.928390 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:26:31.379900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:26:31.850165 kubelet[1295]: E0625 14:26:31.850032 1295 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:26:31.852482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:26:31.852629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:26:31.901455 sshd_keygen[1239]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:26:31.919592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:26:31.932650 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:26:31.937336 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:26:31.937549 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:26:31.940052 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:26:31.947941 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:26:31.961791 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 14:26:31.964041 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 14:26:31.965119 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:26:31.966062 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:26:31.968238 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:26:31.974594 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:26:31.974736 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:26:31.975650 systemd[1]: Startup finished in 549ms (kernel) + 4.612s (initrd) + 4.203s (userspace) = 9.365s. Jun 25 14:26:35.638920 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:26:35.640239 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:57248.service - OpenSSH per-connection server daemon (10.0.0.1:57248). Jun 25 14:26:35.689391 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 57248 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:35.691090 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:35.702708 systemd-logind[1231]: New session 1 of user core. Jun 25 14:26:35.703677 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:26:35.711671 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:26:35.723582 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:26:35.725186 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:26:35.728123 (systemd)[1321]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:35.799292 systemd[1321]: Queued start job for default target default.target. Jun 25 14:26:35.810771 systemd[1321]: Reached target paths.target - Paths. Jun 25 14:26:35.810790 systemd[1321]: Reached target sockets.target - Sockets. Jun 25 14:26:35.810801 systemd[1321]: Reached target timers.target - Timers. Jun 25 14:26:35.810810 systemd[1321]: Reached target basic.target - Basic System. 
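Editor's note: the kubelet failure above is caused by the missing /var/lib/kubelet/config.yaml, a file that is commonly provisioned later (for example by kubeadm); the unit will keep exiting with status 1 until it exists. A minimal sketch that checks for the path named in the error:

    # Minimal sketch: check for the kubelet config file whose absence is reported
    # in the "command failed" error above.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")   # path from the error

    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        print(f"{KUBELET_CONFIG} missing - kubelet.service will fail until it is written")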
Jun 25 14:26:35.810867 systemd[1321]: Reached target default.target - Main User Target. Jun 25 14:26:35.810893 systemd[1321]: Startup finished in 76ms. Jun 25 14:26:35.810952 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:26:35.812103 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:26:35.873107 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:57260.service - OpenSSH per-connection server daemon (10.0.0.1:57260). Jun 25 14:26:35.903004 sshd[1330]: Accepted publickey for core from 10.0.0.1 port 57260 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:35.904107 sshd[1330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:35.907802 systemd-logind[1231]: New session 2 of user core. Jun 25 14:26:35.917510 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 14:26:35.971206 sshd[1330]: pam_unix(sshd:session): session closed for user core Jun 25 14:26:35.984600 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:57260.service: Deactivated successfully. Jun 25 14:26:35.985204 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 14:26:35.985779 systemd-logind[1231]: Session 2 logged out. Waiting for processes to exit. Jun 25 14:26:35.987008 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:57270.service - OpenSSH per-connection server daemon (10.0.0.1:57270). Jun 25 14:26:35.987857 systemd-logind[1231]: Removed session 2. Jun 25 14:26:36.016425 sshd[1336]: Accepted publickey for core from 10.0.0.1 port 57270 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:36.017631 sshd[1336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:36.021423 systemd-logind[1231]: New session 3 of user core. Jun 25 14:26:36.035545 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:26:36.087223 sshd[1336]: pam_unix(sshd:session): session closed for user core Jun 25 14:26:36.100448 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:57270.service: Deactivated successfully. Jun 25 14:26:36.101048 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 14:26:36.101557 systemd-logind[1231]: Session 3 logged out. Waiting for processes to exit. Jun 25 14:26:36.102760 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:57278.service - OpenSSH per-connection server daemon (10.0.0.1:57278). Jun 25 14:26:36.103389 systemd-logind[1231]: Removed session 3. Jun 25 14:26:36.131937 sshd[1342]: Accepted publickey for core from 10.0.0.1 port 57278 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:36.133180 sshd[1342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:36.136551 systemd-logind[1231]: New session 4 of user core. Jun 25 14:26:36.148528 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:26:36.203100 sshd[1342]: pam_unix(sshd:session): session closed for user core Jun 25 14:26:36.217791 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:57278.service: Deactivated successfully. Jun 25 14:26:36.218772 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:26:36.219609 systemd-logind[1231]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:26:36.220773 systemd-logind[1231]: Removed session 4. Jun 25 14:26:36.233154 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:57294.service - OpenSSH per-connection server daemon (10.0.0.1:57294). 
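Editor's note: each "Accepted publickey" line above identifies the key by an OpenSSH-style SHA256 fingerprint (base64 of the SHA-256 digest of the decoded key blob, trailing padding stripped). A minimal sketch of that computation for one line of the authorized_keys file updated earlier in this log; the exact key contents are of course not reproduced here:

    # Minimal sketch: compute the "SHA256:..." fingerprint sshd logs for a public
    # key, from one "<type> <base64-blob> [comment]" line.
    import base64, hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])   # digest covers the key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Usage (path from the update-ssh-keys message earlier in this log):
    # print(ssh_sha256_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))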
Jun 25 14:26:36.261782 sshd[1348]: Accepted publickey for core from 10.0.0.1 port 57294 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:36.262960 sshd[1348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:36.266287 systemd-logind[1231]: New session 5 of user core. Jun 25 14:26:36.278744 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:26:36.344621 sudo[1351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:26:36.345140 sudo[1351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:26:36.365806 sudo[1351]: pam_unix(sudo:session): session closed for user root Jun 25 14:26:36.367802 sshd[1348]: pam_unix(sshd:session): session closed for user core Jun 25 14:26:36.376641 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:57294.service: Deactivated successfully. Jun 25 14:26:36.377255 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:26:36.377935 systemd-logind[1231]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:26:36.379230 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:57302.service - OpenSSH per-connection server daemon (10.0.0.1:57302). Jun 25 14:26:36.380023 systemd-logind[1231]: Removed session 5. Jun 25 14:26:36.410888 sshd[1355]: Accepted publickey for core from 10.0.0.1 port 57302 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:36.412716 sshd[1355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:36.417107 systemd-logind[1231]: New session 6 of user core. Jun 25 14:26:36.427594 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:26:36.480071 sudo[1359]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:26:36.480305 sudo[1359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:26:36.483150 sudo[1359]: pam_unix(sudo:session): session closed for user root Jun 25 14:26:36.487435 sudo[1358]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:26:36.487661 sudo[1358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:26:36.504705 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:26:36.511391 kernel: kauditd_printk_skb: 119 callbacks suppressed Jun 25 14:26:36.511461 kernel: audit: type=1305 audit(1719325596.504:189): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:26:36.511479 kernel: audit: type=1300 audit(1719325596.504:189): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc02ef230 a2=420 a3=0 items=0 ppid=1 pid=1362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:36.511512 kernel: audit: type=1327 audit(1719325596.504:189): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:26:36.511527 kernel: audit: type=1131 audit(1719325596.505:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:36.504000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:26:36.504000 audit[1362]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc02ef230 a2=420 a3=0 items=0 ppid=1 pid=1362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:36.504000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:26:36.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.511682 auditctl[1362]: No rules Jun 25 14:26:36.506388 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:26:36.506554 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:26:36.508154 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:26:36.527665 augenrules[1379]: No rules Jun 25 14:26:36.528529 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:26:36.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.529390 sudo[1358]: pam_unix(sudo:session): session closed for user root Jun 25 14:26:36.528000 audit[1358]: USER_END pid=1358 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.533275 kernel: audit: type=1130 audit(1719325596.527:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.533308 kernel: audit: type=1106 audit(1719325596.528:192): pid=1358 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.528000 audit[1358]: CRED_DISP pid=1358 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.535736 kernel: audit: type=1104 audit(1719325596.528:193): pid=1358 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:26:36.536122 sshd[1355]: pam_unix(sshd:session): session closed for user core Jun 25 14:26:36.535000 audit[1355]: USER_END pid=1355 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.540296 kernel: audit: type=1106 audit(1719325596.535:194): pid=1355 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.540368 kernel: audit: type=1104 audit(1719325596.535:195): pid=1355 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.535000 audit[1355]: CRED_DISP pid=1355 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.546734 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:57302.service: Deactivated successfully. Jun 25 14:26:36.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:57302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.547438 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:26:36.548034 systemd-logind[1231]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:26:36.549402 kernel: audit: type=1131 audit(1719325596.545:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:57302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.549296 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:57318.service - OpenSSH per-connection server daemon (10.0.0.1:57318). Jun 25 14:26:36.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:57318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.550251 systemd-logind[1231]: Removed session 6. 
Jun 25 14:26:36.578000 audit[1385]: USER_ACCT pid=1385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.580285 sshd[1385]: Accepted publickey for core from 10.0.0.1 port 57318 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:26:36.579000 audit[1385]: CRED_ACQ pid=1385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.580000 audit[1385]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe1a3ecd0 a2=3 a3=1 items=0 ppid=1 pid=1385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:36.580000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:26:36.581882 sshd[1385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:26:36.585989 systemd-logind[1231]: New session 7 of user core. Jun 25 14:26:36.595537 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:26:36.597000 audit[1385]: USER_START pid=1385 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.599000 audit[1387]: CRED_ACQ pid=1387 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:26:36.646000 audit[1388]: USER_ACCT pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.647984 sudo[1388]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:26:36.646000 audit[1388]: CRED_REFR pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.648217 sudo[1388]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:26:36.648000 audit[1388]: USER_START pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:26:36.771777 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:26:37.135304 dockerd[1398]: time="2024-06-25T14:26:37.135175087Z" level=info msg="Starting up" Jun 25 14:26:37.239415 dockerd[1398]: time="2024-06-25T14:26:37.239366967Z" level=info msg="Loading containers: start." 
Jun 25 14:26:37.286000 audit[1434]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.286000 audit[1434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc8300eb0 a2=0 a3=1 items=0 ppid=1398 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.286000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:26:37.289000 audit[1436]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.289000 audit[1436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd0cab850 a2=0 a3=1 items=0 ppid=1398 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.289000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:26:37.291000 audit[1438]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.291000 audit[1438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcf8b0f40 a2=0 a3=1 items=0 ppid=1398 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.291000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:26:37.293000 audit[1440]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.293000 audit[1440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe7dfa980 a2=0 a3=1 items=0 ppid=1398 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.293000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:26:37.300000 audit[1442]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.300000 audit[1442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdfe4c420 a2=0 a3=1 items=0 ppid=1398 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.300000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:26:37.302000 audit[1444]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.302000 audit[1444]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffded827d0 a2=0 a3=1 items=0 ppid=1398 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.302000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:26:37.310000 audit[1446]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.310000 audit[1446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3675a10 a2=0 a3=1 items=0 ppid=1398 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:26:37.312000 audit[1448]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.312000 audit[1448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe469b630 a2=0 a3=1 items=0 ppid=1398 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:26:37.314000 audit[1450]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.314000 audit[1450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe657e120 a2=0 a3=1 items=0 ppid=1398 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.314000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:26:37.321000 audit[1454]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.321000 audit[1454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe7d16830 a2=0 a3=1 items=0 ppid=1398 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.321000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:26:37.322000 audit[1455]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.322000 audit[1455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc95e7ae0 a2=0 a3=1 items=0 ppid=1398 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:26:37.330363 kernel: Initializing XFRM netlink socket Jun 25 14:26:37.355000 audit[1463]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.355000 audit[1463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffcdfc89a0 a2=0 a3=1 items=0 ppid=1398 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.355000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:26:37.367000 audit[1466]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.367000 audit[1466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd7f56960 a2=0 a3=1 items=0 ppid=1398 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.367000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:26:37.372000 audit[1470]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.372000 audit[1470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff7e93b40 a2=0 a3=1 items=0 ppid=1398 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.372000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 14:26:37.373000 audit[1472]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.373000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff8cb84d0 a2=0 a3=1 items=0 ppid=1398 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.373000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:26:37.375000 audit[1474]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.375000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffcfecac50 a2=0 a3=1 items=0 ppid=1398 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:26:37.377000 audit[1476]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.377000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff902d370 a2=0 a3=1 items=0 ppid=1398 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.377000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:26:37.379000 audit[1478]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.379000 audit[1478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc1203640 a2=0 a3=1 items=0 ppid=1398 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.379000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:26:37.385000 audit[1481]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.385000 audit[1481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff9993f90 a2=0 a3=1 items=0 ppid=1398 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.385000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:26:37.387000 audit[1483]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.387000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc2587e30 a2=0 a3=1 items=0 ppid=1398 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.387000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:26:37.390000 audit[1485]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.390000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe054a010 a2=0 a3=1 items=0 ppid=1398 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.390000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:26:37.392000 audit[1487]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.392000 audit[1487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcd081150 a2=0 a3=1 items=0 ppid=1398 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.392000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:26:37.393946 systemd-networkd[1082]: docker0: Link UP Jun 25 14:26:37.401000 audit[1491]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.401000 audit[1491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc312a520 a2=0 a3=1 items=0 ppid=1398 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.401000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:26:37.402000 audit[1492]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:26:37.402000 audit[1492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdde425b0 a2=0 a3=1 items=0 ppid=1398 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:26:37.402000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:26:37.404033 dockerd[1398]: time="2024-06-25T14:26:37.403985487Z" level=info msg="Loading containers: done." 
Jun 25 14:26:37.473204 dockerd[1398]: time="2024-06-25T14:26:37.473160167Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:26:37.473610 dockerd[1398]: time="2024-06-25T14:26:37.473585167Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:26:37.473803 dockerd[1398]: time="2024-06-25T14:26:37.473783527Z" level=info msg="Daemon has completed initialization" Jun 25 14:26:37.500579 dockerd[1398]: time="2024-06-25T14:26:37.500437327Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:26:37.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:37.500594 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 14:26:38.076616 containerd[1244]: time="2024-06-25T14:26:38.076564687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 14:26:38.218677 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3087492355-merged.mount: Deactivated successfully. Jun 25 14:26:38.593355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017712597.mount: Deactivated successfully. Jun 25 14:26:40.482443 containerd[1244]: time="2024-06-25T14:26:40.482392807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:40.483649 containerd[1244]: time="2024-06-25T14:26:40.483619047Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540" Jun 25 14:26:40.484250 containerd[1244]: time="2024-06-25T14:26:40.484224407Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:40.488996 containerd[1244]: time="2024-06-25T14:26:40.488967767Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:40.490936 containerd[1244]: time="2024-06-25T14:26:40.490908207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:40.493125 containerd[1244]: time="2024-06-25T14:26:40.493071207Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.41645856s" Jun 25 14:26:40.493125 containerd[1244]: time="2024-06-25T14:26:40.493119127Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 14:26:40.511838 containerd[1244]: time="2024-06-25T14:26:40.511799687Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 14:26:41.886118 containerd[1244]: 
time="2024-06-25T14:26:41.886052087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:41.886783 containerd[1244]: time="2024-06-25T14:26:41.886729007Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120" Jun 25 14:26:41.888169 containerd[1244]: time="2024-06-25T14:26:41.888113687Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:41.890442 containerd[1244]: time="2024-06-25T14:26:41.890394167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:41.893041 containerd[1244]: time="2024-06-25T14:26:41.892926247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:41.894095 containerd[1244]: time="2024-06-25T14:26:41.894050287Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.38207664s" Jun 25 14:26:41.894095 containerd[1244]: time="2024-06-25T14:26:41.894091927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 14:26:41.915861 containerd[1244]: time="2024-06-25T14:26:41.915821887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 14:26:41.922991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:26:41.923165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:26:41.926904 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 14:26:41.926989 kernel: audit: type=1130 audit(1719325601.921:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:41.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:41.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:41.930954 kernel: audit: type=1131 audit(1719325601.921:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:41.936782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 14:26:42.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:42.042990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:26:42.047398 kernel: audit: type=1130 audit(1719325602.041:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:42.095810 kubelet[1615]: E0625 14:26:42.095763 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:26:42.098693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:26:42.098833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:26:42.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:26:42.101365 kernel: audit: type=1131 audit(1719325602.097:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:26:42.851171 containerd[1244]: time="2024-06-25T14:26:42.851099567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:42.851830 containerd[1244]: time="2024-06-25T14:26:42.851778807Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440" Jun 25 14:26:42.853059 containerd[1244]: time="2024-06-25T14:26:42.853020167Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:42.855087 containerd[1244]: time="2024-06-25T14:26:42.855030407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:42.856979 containerd[1244]: time="2024-06-25T14:26:42.856930247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:26:42.858354 containerd[1244]: time="2024-06-25T14:26:42.858308287Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 942.27808ms" Jun 25 14:26:42.858407 containerd[1244]: time="2024-06-25T14:26:42.858361927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 14:26:42.877566 
containerd[1244]: time="2024-06-25T14:26:42.877531287Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 14:26:43.805254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114290270.mount: Deactivated successfully. Jun 25 14:26:52.173017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:26:52.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:52.173188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:26:52.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:52.177696 kernel: audit: type=1130 audit(1719325612.171:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:52.177760 kernel: audit: type=1131 audit(1719325612.171:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:52.179729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:26:52.282569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:26:52.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:52.285376 kernel: audit: type=1130 audit(1719325612.281:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:26:52.339138 kubelet[1642]: E0625 14:26:52.339082 1642 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:26:52.341271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:26:52.341424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:26:52.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:26:52.344420 kernel: audit: type=1131 audit(1719325612.340:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:27:02.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:02.423090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jun 25 14:27:02.436119 kernel: audit: type=1130 audit(1719325622.421:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:02.436157 kernel: audit: type=1131 audit(1719325622.421:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:02.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:02.423263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:02.435672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:02.555491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:02.559419 kernel: audit: type=1130 audit(1719325622.553:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:02.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:02.608890 kubelet[1655]: E0625 14:27:02.608844 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:27:02.613553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:27:02.613894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:27:02.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:27:02.617370 kernel: audit: type=1131 audit(1719325622.612:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 14:27:05.235216 containerd[1244]: time="2024-06-25T14:27:05.235153582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.235704 containerd[1244]: time="2024-06-25T14:27:05.235656983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463" Jun 25 14:27:05.236574 containerd[1244]: time="2024-06-25T14:27:05.236535547Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.238536 containerd[1244]: time="2024-06-25T14:27:05.238499033Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.239862 containerd[1244]: time="2024-06-25T14:27:05.239810958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.240713 containerd[1244]: time="2024-06-25T14:27:05.240672721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 22.363099114s" Jun 25 14:27:05.240776 containerd[1244]: time="2024-06-25T14:27:05.240711361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 14:27:05.260744 containerd[1244]: time="2024-06-25T14:27:05.260697151Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:27:05.716478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137625779.mount: Deactivated successfully. 
Jun 25 14:27:05.721557 containerd[1244]: time="2024-06-25T14:27:05.721508598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.722328 containerd[1244]: time="2024-06-25T14:27:05.722288200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 14:27:05.722878 containerd[1244]: time="2024-06-25T14:27:05.722838162Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.724889 containerd[1244]: time="2024-06-25T14:27:05.724852209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.727034 containerd[1244]: time="2024-06-25T14:27:05.726992017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:05.727948 containerd[1244]: time="2024-06-25T14:27:05.727919740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 467.177509ms" Jun 25 14:27:05.728028 containerd[1244]: time="2024-06-25T14:27:05.727952180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:27:05.746883 containerd[1244]: time="2024-06-25T14:27:05.746845566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 14:27:06.210328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861020306.mount: Deactivated successfully. 
Jun 25 14:27:07.588013 containerd[1244]: time="2024-06-25T14:27:07.587961642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:07.588468 containerd[1244]: time="2024-06-25T14:27:07.588423124Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jun 25 14:27:07.589718 containerd[1244]: time="2024-06-25T14:27:07.589685008Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:07.592003 containerd[1244]: time="2024-06-25T14:27:07.591969335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:07.594228 containerd[1244]: time="2024-06-25T14:27:07.594195061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:07.596677 containerd[1244]: time="2024-06-25T14:27:07.596628189Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.849739623s" Jun 25 14:27:07.597006 containerd[1244]: time="2024-06-25T14:27:07.596678229Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 14:27:07.616365 containerd[1244]: time="2024-06-25T14:27:07.616303049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 14:27:08.217681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782687854.mount: Deactivated successfully. 
Jun 25 14:27:08.571386 containerd[1244]: time="2024-06-25T14:27:08.571034588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:08.571904 containerd[1244]: time="2024-06-25T14:27:08.571865230Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Jun 25 14:27:08.572900 containerd[1244]: time="2024-06-25T14:27:08.572861113Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:08.574821 containerd[1244]: time="2024-06-25T14:27:08.574789079Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:08.576774 containerd[1244]: time="2024-06-25T14:27:08.576743524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:08.577510 containerd[1244]: time="2024-06-25T14:27:08.577477767Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 961.107678ms" Jun 25 14:27:08.577606 containerd[1244]: time="2024-06-25T14:27:08.577585807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 14:27:12.673018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 14:27:12.673189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:12.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:12.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:12.676973 kernel: audit: type=1130 audit(1719325632.671:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:12.677030 kernel: audit: type=1131 audit(1719325632.671:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:12.684714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:12.778049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:12.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:27:12.781357 kernel: audit: type=1130 audit(1719325632.776:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:12.829581 kubelet[1822]: E0625 14:27:12.829527 1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:27:12.831523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:27:12.831656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:27:12.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:27:12.834376 kernel: audit: type=1131 audit(1719325632.830:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:27:13.428625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:13.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.433175 kernel: audit: type=1130 audit(1719325633.427:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.433230 kernel: audit: type=1131 audit(1719325633.427:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.439045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:13.455985 systemd[1]: Reloading. Jun 25 14:27:13.684399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 14:27:13.729000 audit: BPF prog-id=38 op=LOAD Jun 25 14:27:13.730000 audit: BPF prog-id=33 op=UNLOAD Jun 25 14:27:13.732429 kernel: audit: type=1334 audit(1719325633.729:249): prog-id=38 op=LOAD Jun 25 14:27:13.732469 kernel: audit: type=1334 audit(1719325633.730:250): prog-id=33 op=UNLOAD Jun 25 14:27:13.731000 audit: BPF prog-id=39 op=LOAD Jun 25 14:27:13.733783 kernel: audit: type=1334 audit(1719325633.731:251): prog-id=39 op=LOAD Jun 25 14:27:13.732000 audit: BPF prog-id=24 op=UNLOAD Jun 25 14:27:13.734409 kernel: audit: type=1334 audit(1719325633.732:252): prog-id=24 op=UNLOAD Jun 25 14:27:13.733000 audit: BPF prog-id=40 op=LOAD Jun 25 14:27:13.733000 audit: BPF prog-id=41 op=LOAD Jun 25 14:27:13.733000 audit: BPF prog-id=25 op=UNLOAD Jun 25 14:27:13.733000 audit: BPF prog-id=26 op=UNLOAD Jun 25 14:27:13.734000 audit: BPF prog-id=42 op=LOAD Jun 25 14:27:13.734000 audit: BPF prog-id=27 op=UNLOAD Jun 25 14:27:13.734000 audit: BPF prog-id=43 op=LOAD Jun 25 14:27:13.734000 audit: BPF prog-id=44 op=LOAD Jun 25 14:27:13.734000 audit: BPF prog-id=28 op=UNLOAD Jun 25 14:27:13.734000 audit: BPF prog-id=29 op=UNLOAD Jun 25 14:27:13.735000 audit: BPF prog-id=45 op=LOAD Jun 25 14:27:13.735000 audit: BPF prog-id=34 op=UNLOAD Jun 25 14:27:13.736000 audit: BPF prog-id=46 op=LOAD Jun 25 14:27:13.736000 audit: BPF prog-id=30 op=UNLOAD Jun 25 14:27:13.736000 audit: BPF prog-id=47 op=LOAD Jun 25 14:27:13.736000 audit: BPF prog-id=48 op=LOAD Jun 25 14:27:13.736000 audit: BPF prog-id=31 op=UNLOAD Jun 25 14:27:13.736000 audit: BPF prog-id=32 op=UNLOAD Jun 25 14:27:13.738000 audit: BPF prog-id=49 op=LOAD Jun 25 14:27:13.738000 audit: BPF prog-id=35 op=UNLOAD Jun 25 14:27:13.738000 audit: BPF prog-id=50 op=LOAD Jun 25 14:27:13.738000 audit: BPF prog-id=51 op=LOAD Jun 25 14:27:13.738000 audit: BPF prog-id=36 op=UNLOAD Jun 25 14:27:13.738000 audit: BPF prog-id=37 op=UNLOAD Jun 25 14:27:13.771498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:13.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.774609 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:13.775090 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:27:13.775321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:13.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.777520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:13.872821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:13.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:13.919536 kubelet[1896]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 14:27:13.919536 kubelet[1896]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:27:13.919536 kubelet[1896]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:27:13.919927 kubelet[1896]: I0625 14:27:13.919607 1896 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:27:14.626440 update_engine[1236]: I0625 14:27:14.626389 1236 update_attempter.cc:509] Updating boot flags... Jun 25 14:27:14.798371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1911) Jun 25 14:27:14.830370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1915) Jun 25 14:27:14.860364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1915) Jun 25 14:27:15.206767 kubelet[1896]: I0625 14:27:15.206722 1896 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:27:15.206767 kubelet[1896]: I0625 14:27:15.206755 1896 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:27:15.207082 kubelet[1896]: I0625 14:27:15.206959 1896 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:27:15.237645 kubelet[1896]: I0625 14:27:15.237605 1896 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:27:15.239736 kubelet[1896]: E0625 14:27:15.239702 1896 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.245853 kubelet[1896]: W0625 14:27:15.245820 1896 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:27:15.247156 kubelet[1896]: I0625 14:27:15.247130 1896 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:27:15.247384 kubelet[1896]: I0625 14:27:15.247365 1896 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:27:15.247564 kubelet[1896]: I0625 14:27:15.247543 1896 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:27:15.247645 kubelet[1896]: I0625 14:27:15.247574 1896 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:27:15.247645 kubelet[1896]: I0625 14:27:15.247585 1896 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:27:15.247784 kubelet[1896]: I0625 14:27:15.247760 1896 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:27:15.248893 kubelet[1896]: I0625 14:27:15.248872 1896 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:27:15.248929 kubelet[1896]: I0625 14:27:15.248899 1896 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:27:15.249003 kubelet[1896]: I0625 14:27:15.248984 1896 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:27:15.249003 kubelet[1896]: I0625 14:27:15.249002 1896 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:27:15.251518 kubelet[1896]: W0625 14:27:15.251468 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.251558 kubelet[1896]: E0625 14:27:15.251529 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.251616 kubelet[1896]: W0625 14:27:15.251586 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.251616 kubelet[1896]: 
E0625 14:27:15.251616 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.252087 kubelet[1896]: I0625 14:27:15.252053 1896 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:27:15.253404 kubelet[1896]: W0625 14:27:15.253382 1896 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 14:27:15.254045 kubelet[1896]: I0625 14:27:15.254020 1896 server.go:1232] "Started kubelet" Jun 25 14:27:15.254707 kubelet[1896]: I0625 14:27:15.254682 1896 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:27:15.255492 kubelet[1896]: I0625 14:27:15.255468 1896 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:27:15.257293 kubelet[1896]: I0625 14:27:15.257270 1896 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:27:15.257559 kubelet[1896]: E0625 14:27:15.257533 1896 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:27:15.257559 kubelet[1896]: I0625 14:27:15.257542 1896 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:27:15.257559 kubelet[1896]: E0625 14:27:15.257562 1896 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:27:15.257805 kubelet[1896]: I0625 14:27:15.257779 1896 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:27:15.260040 kubelet[1896]: E0625 14:27:15.259663 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:27:15.260040 kubelet[1896]: I0625 14:27:15.259712 1896 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:27:15.260040 kubelet[1896]: I0625 14:27:15.259812 1896 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:27:15.260040 kubelet[1896]: I0625 14:27:15.259879 1896 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:27:15.260289 kubelet[1896]: E0625 14:27:15.260251 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Jun 25 14:27:15.260341 kubelet[1896]: W0625 14:27:15.260266 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.260397 kubelet[1896]: E0625 14:27:15.260357 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.261584 kubelet[1896]: E0625 
14:27:15.261427 1896 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc458e4e899c04", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 27, 15, 254000644, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 27, 15, 254000644, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.85:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.85:6443: connect: connection refused'(may retry after sleeping) Jun 25 14:27:15.260000 audit[1923]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.260000 audit[1923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe3c8be20 a2=0 a3=1 items=0 ppid=1896 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:27:15.262000 audit[1924]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.262000 audit[1924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3adaf50 a2=0 a3=1 items=0 ppid=1896 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:27:15.264000 audit[1926]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.264000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd576d7c0 a2=0 a3=1 items=0 ppid=1896 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:27:15.267000 audit[1930]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jun 25 14:27:15.267000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffaa16cc0 a2=0 a3=1 items=0 ppid=1896 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:27:15.276000 audit[1933]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.276000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc8425600 a2=0 a3=1 items=0 ppid=1896 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:27:15.277724 kubelet[1896]: I0625 14:27:15.277682 1896 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:27:15.277000 audit[1935]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:15.277000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd8752c70 a2=0 a3=1 items=0 ppid=1896 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:27:15.279037 kubelet[1896]: I0625 14:27:15.279011 1896 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:27:15.279037 kubelet[1896]: I0625 14:27:15.279037 1896 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:27:15.279167 kubelet[1896]: I0625 14:27:15.279054 1896 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:27:15.279167 kubelet[1896]: E0625 14:27:15.279106 1896 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:27:15.278000 audit[1934]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.278000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff590a0e0 a2=0 a3=1 items=0 ppid=1896 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:27:15.279764 kubelet[1896]: W0625 14:27:15.279674 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.279840 kubelet[1896]: E0625 14:27:15.279777 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:15.279000 audit[1938]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:15.279000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9d43300 a2=0 a3=1 items=0 ppid=1896 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:27:15.281671 kubelet[1896]: I0625 14:27:15.281619 1896 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:27:15.281671 kubelet[1896]: I0625 14:27:15.281645 1896 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:27:15.281671 kubelet[1896]: I0625 14:27:15.281665 1896 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:27:15.280000 audit[1939]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.280000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd330c3d0 a2=0 a3=1 items=0 ppid=1896 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 
14:27:15.281000 audit[1940]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:15.281000 audit[1940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffdfab2570 a2=0 a3=1 items=0 ppid=1896 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:27:15.281000 audit[1941]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:15.281000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb508a10 a2=0 a3=1 items=0 ppid=1896 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:27:15.283889 kubelet[1896]: I0625 14:27:15.283867 1896 policy_none.go:49] "None policy: Start" Jun 25 14:27:15.283000 audit[1942]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:15.283000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffded02770 a2=0 a3=1 items=0 ppid=1896 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:15.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:27:15.284692 kubelet[1896]: I0625 14:27:15.284571 1896 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:27:15.284692 kubelet[1896]: I0625 14:27:15.284613 1896 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:27:15.290727 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 14:27:15.307948 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 14:27:15.310587 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 14:27:15.321100 kubelet[1896]: I0625 14:27:15.321068 1896 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:27:15.321408 kubelet[1896]: I0625 14:27:15.321387 1896 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:27:15.322091 kubelet[1896]: E0625 14:27:15.322073 1896 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 14:27:15.361272 kubelet[1896]: I0625 14:27:15.361221 1896 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:27:15.361711 kubelet[1896]: E0625 14:27:15.361690 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jun 25 14:27:15.379814 kubelet[1896]: I0625 14:27:15.379768 1896 topology_manager.go:215] "Topology Admit Handler" podUID="bc4ca8e2a5446d595902efa4c09c1216" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 14:27:15.381045 kubelet[1896]: I0625 14:27:15.381022 1896 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 14:27:15.381992 kubelet[1896]: I0625 14:27:15.381971 1896 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 14:27:15.392337 systemd[1]: Created slice kubepods-burstable-podbc4ca8e2a5446d595902efa4c09c1216.slice - libcontainer container kubepods-burstable-podbc4ca8e2a5446d595902efa4c09c1216.slice. Jun 25 14:27:15.411361 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jun 25 14:27:15.415558 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
Jun 25 14:27:15.461418 kubelet[1896]: E0625 14:27:15.460702 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Jun 25 14:27:15.561116 kubelet[1896]: I0625 14:27:15.561079 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4ca8e2a5446d595902efa4c09c1216-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc4ca8e2a5446d595902efa4c09c1216\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:15.561442 kubelet[1896]: I0625 14:27:15.561424 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:15.561564 kubelet[1896]: I0625 14:27:15.561552 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:15.561670 kubelet[1896]: I0625 14:27:15.561657 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:15.561771 kubelet[1896]: I0625 14:27:15.561760 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4ca8e2a5446d595902efa4c09c1216-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc4ca8e2a5446d595902efa4c09c1216\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:15.562503 kubelet[1896]: I0625 14:27:15.562481 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4ca8e2a5446d595902efa4c09c1216-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc4ca8e2a5446d595902efa4c09c1216\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:15.562712 kubelet[1896]: I0625 14:27:15.562702 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:15.563030 kubelet[1896]: I0625 14:27:15.563018 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:15.563171 
kubelet[1896]: I0625 14:27:15.563159 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 14:27:15.563257 kubelet[1896]: I0625 14:27:15.562904 1896 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:27:15.563666 kubelet[1896]: E0625 14:27:15.563649 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jun 25 14:27:15.711615 kubelet[1896]: E0625 14:27:15.711496 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:15.712777 containerd[1244]: time="2024-06-25T14:27:15.712714842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc4ca8e2a5446d595902efa4c09c1216,Namespace:kube-system,Attempt:0,}" Jun 25 14:27:15.713774 kubelet[1896]: E0625 14:27:15.713756 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:15.714221 containerd[1244]: time="2024-06-25T14:27:15.714174925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 14:27:15.717514 kubelet[1896]: E0625 14:27:15.717491 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:15.718167 containerd[1244]: time="2024-06-25T14:27:15.717975332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 14:27:15.862730 kubelet[1896]: E0625 14:27:15.862685 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Jun 25 14:27:15.965448 kubelet[1896]: I0625 14:27:15.965308 1896 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:27:15.965825 kubelet[1896]: E0625 14:27:15.965800 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jun 25 14:27:16.182257 kubelet[1896]: W0625 14:27:16.182187 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.182257 kubelet[1896]: E0625 14:27:16.182251 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.265989 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3393101156.mount: Deactivated successfully. Jun 25 14:27:16.272937 containerd[1244]: time="2024-06-25T14:27:16.272885876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.273951 containerd[1244]: time="2024-06-25T14:27:16.273903718Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.274505 containerd[1244]: time="2024-06-25T14:27:16.274464279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:27:16.275305 containerd[1244]: time="2024-06-25T14:27:16.275266520Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.276215 containerd[1244]: time="2024-06-25T14:27:16.276186082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 14:27:16.276634 containerd[1244]: time="2024-06-25T14:27:16.276594282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:27:16.277368 containerd[1244]: time="2024-06-25T14:27:16.277328964Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.278961 containerd[1244]: time="2024-06-25T14:27:16.278930246Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.282554 containerd[1244]: time="2024-06-25T14:27:16.282518693Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.283486 containerd[1244]: time="2024-06-25T14:27:16.283452854Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.284622 containerd[1244]: time="2024-06-25T14:27:16.284582816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.287419 containerd[1244]: time="2024-06-25T14:27:16.287376261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.289155 containerd[1244]: time="2024-06-25T14:27:16.289113944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 576.273221ms" Jun 25 14:27:16.289905 containerd[1244]: time="2024-06-25T14:27:16.289853505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.778413ms" Jun 25 14:27:16.290846 containerd[1244]: time="2024-06-25T14:27:16.290801067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.526462ms" Jun 25 14:27:16.291736 containerd[1244]: time="2024-06-25T14:27:16.291687388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.292729 containerd[1244]: time="2024-06-25T14:27:16.292690110Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.293648 containerd[1244]: time="2024-06-25T14:27:16.293613672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:27:16.382040 kubelet[1896]: W0625 14:27:16.381955 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.382040 kubelet[1896]: E0625 14:27:16.382016 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.477027 containerd[1244]: time="2024-06-25T14:27:16.476916786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:16.477027 containerd[1244]: time="2024-06-25T14:27:16.476992626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:16.477226 containerd[1244]: time="2024-06-25T14:27:16.477006866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:16.477294 containerd[1244]: time="2024-06-25T14:27:16.477215146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:16.482747 containerd[1244]: time="2024-06-25T14:27:16.482318235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:16.482747 containerd[1244]: time="2024-06-25T14:27:16.482392035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:16.482747 containerd[1244]: time="2024-06-25T14:27:16.482410635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:16.482747 containerd[1244]: time="2024-06-25T14:27:16.482424355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:16.483507 containerd[1244]: time="2024-06-25T14:27:16.483270317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:16.483507 containerd[1244]: time="2024-06-25T14:27:16.483350557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:16.483507 containerd[1244]: time="2024-06-25T14:27:16.483365557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:16.483507 containerd[1244]: time="2024-06-25T14:27:16.483375157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:16.496587 systemd[1]: Started cri-containerd-93d158db0853f7f53d098542c5ae933bf1d202373f6f87556847a6f6420ee036.scope - libcontainer container 93d158db0853f7f53d098542c5ae933bf1d202373f6f87556847a6f6420ee036. Jun 25 14:27:16.499410 systemd[1]: Started cri-containerd-3bf19d39f4d2849e73e58f3a8c342b49ffca35bb927f4084b78f7dfec06721b5.scope - libcontainer container 3bf19d39f4d2849e73e58f3a8c342b49ffca35bb927f4084b78f7dfec06721b5. Jun 25 14:27:16.503291 kubelet[1896]: W0625 14:27:16.503183 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.503291 kubelet[1896]: E0625 14:27:16.503231 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.504479 systemd[1]: Started cri-containerd-93b1494a96cd779e14946dd9899de14a286b7fea3d9e326f5d13bef9b7a2a5e4.scope - libcontainer container 93b1494a96cd779e14946dd9899de14a286b7fea3d9e326f5d13bef9b7a2a5e4. 
Jun 25 14:27:16.508000 audit: BPF prog-id=52 op=LOAD Jun 25 14:27:16.509000 audit: BPF prog-id=53 op=LOAD Jun 25 14:27:16.509000 audit[1994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=1973 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933643135386462303835336637663533643039383534326335616539 Jun 25 14:27:16.509000 audit: BPF prog-id=54 op=LOAD Jun 25 14:27:16.509000 audit[1994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=1973 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933643135386462303835336637663533643039383534326335616539 Jun 25 14:27:16.509000 audit: BPF prog-id=54 op=UNLOAD Jun 25 14:27:16.509000 audit: BPF prog-id=53 op=UNLOAD Jun 25 14:27:16.509000 audit: BPF prog-id=55 op=LOAD Jun 25 14:27:16.509000 audit[1994]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=1973 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933643135386462303835336637663533643039383534326335616539 Jun 25 14:27:16.512000 audit: BPF prog-id=56 op=LOAD Jun 25 14:27:16.513000 audit: BPF prog-id=57 op=LOAD Jun 25 14:27:16.513000 audit[2009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1972 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362663139643339663464323834396537336535386633613863333432 Jun 25 14:27:16.513000 audit: BPF prog-id=58 op=LOAD Jun 25 14:27:16.513000 audit[2009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1972 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.513000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362663139643339663464323834396537336535386633613863333432 Jun 25 14:27:16.513000 audit: BPF prog-id=58 op=UNLOAD Jun 25 14:27:16.513000 audit: BPF prog-id=57 op=UNLOAD Jun 25 14:27:16.513000 audit: BPF prog-id=59 op=LOAD Jun 25 14:27:16.513000 audit[2009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=1972 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362663139643339663464323834396537336535386633613863333432 Jun 25 14:27:16.514000 audit: BPF prog-id=60 op=LOAD Jun 25 14:27:16.514000 audit: BPF prog-id=61 op=LOAD Jun 25 14:27:16.514000 audit[2010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=1974 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933623134393461393663643737396531343934366464393839396465 Jun 25 14:27:16.514000 audit: BPF prog-id=62 op=LOAD Jun 25 14:27:16.514000 audit[2010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=1974 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933623134393461393663643737396531343934366464393839396465 Jun 25 14:27:16.514000 audit: BPF prog-id=62 op=UNLOAD Jun 25 14:27:16.514000 audit: BPF prog-id=61 op=UNLOAD Jun 25 14:27:16.514000 audit: BPF prog-id=63 op=LOAD Jun 25 14:27:16.514000 audit[2010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=1974 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933623134393461393663643737396531343934366464393839396465 Jun 25 14:27:16.536835 containerd[1244]: time="2024-06-25T14:27:16.536797448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"3bf19d39f4d2849e73e58f3a8c342b49ffca35bb927f4084b78f7dfec06721b5\"" Jun 25 14:27:16.538468 kubelet[1896]: E0625 14:27:16.538081 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:16.540831 containerd[1244]: time="2024-06-25T14:27:16.540781455Z" level=info msg="CreateContainer within sandbox \"3bf19d39f4d2849e73e58f3a8c342b49ffca35bb927f4084b78f7dfec06721b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:27:16.542768 containerd[1244]: time="2024-06-25T14:27:16.542727419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc4ca8e2a5446d595902efa4c09c1216,Namespace:kube-system,Attempt:0,} returns sandbox id \"93d158db0853f7f53d098542c5ae933bf1d202373f6f87556847a6f6420ee036\"" Jun 25 14:27:16.543572 kubelet[1896]: E0625 14:27:16.543537 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:16.546241 containerd[1244]: time="2024-06-25T14:27:16.546201385Z" level=info msg="CreateContainer within sandbox \"93d158db0853f7f53d098542c5ae933bf1d202373f6f87556847a6f6420ee036\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:27:16.546894 containerd[1244]: time="2024-06-25T14:27:16.546853226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"93b1494a96cd779e14946dd9899de14a286b7fea3d9e326f5d13bef9b7a2a5e4\"" Jun 25 14:27:16.548368 kubelet[1896]: E0625 14:27:16.548203 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:16.550149 containerd[1244]: time="2024-06-25T14:27:16.550097631Z" level=info msg="CreateContainer within sandbox \"93b1494a96cd779e14946dd9899de14a286b7fea3d9e326f5d13bef9b7a2a5e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:27:16.558574 containerd[1244]: time="2024-06-25T14:27:16.558523846Z" level=info msg="CreateContainer within sandbox \"3bf19d39f4d2849e73e58f3a8c342b49ffca35bb927f4084b78f7dfec06721b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a6cd47615fc138106683fb09a7a250d2b30c4c542edc207b353b2334cf70962\"" Jun 25 14:27:16.559494 containerd[1244]: time="2024-06-25T14:27:16.559463887Z" level=info msg="StartContainer for \"1a6cd47615fc138106683fb09a7a250d2b30c4c542edc207b353b2334cf70962\"" Jun 25 14:27:16.563228 containerd[1244]: time="2024-06-25T14:27:16.563167374Z" level=info msg="CreateContainer within sandbox \"93d158db0853f7f53d098542c5ae933bf1d202373f6f87556847a6f6420ee036\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db3f8d489126902fd136cc60ac0350ea166061dd86cee090b2baa12fd9ebf260\"" Jun 25 14:27:16.564034 containerd[1244]: time="2024-06-25T14:27:16.564005935Z" level=info msg="StartContainer for \"db3f8d489126902fd136cc60ac0350ea166061dd86cee090b2baa12fd9ebf260\"" Jun 25 14:27:16.564258 containerd[1244]: time="2024-06-25T14:27:16.564225255Z" level=info msg="CreateContainer within sandbox \"93b1494a96cd779e14946dd9899de14a286b7fea3d9e326f5d13bef9b7a2a5e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"642fab73a2b6a0749950c0ad3bf98c24c432a8e74a5d85ed1989e86f7974075c\"" Jun 25 14:27:16.564713 containerd[1244]: time="2024-06-25T14:27:16.564679296Z" level=info msg="StartContainer for \"642fab73a2b6a0749950c0ad3bf98c24c432a8e74a5d85ed1989e86f7974075c\"" Jun 25 14:27:16.587511 systemd[1]: Started cri-containerd-1a6cd47615fc138106683fb09a7a250d2b30c4c542edc207b353b2334cf70962.scope - libcontainer container 1a6cd47615fc138106683fb09a7a250d2b30c4c542edc207b353b2334cf70962. Jun 25 14:27:16.588477 systemd[1]: Started cri-containerd-db3f8d489126902fd136cc60ac0350ea166061dd86cee090b2baa12fd9ebf260.scope - libcontainer container db3f8d489126902fd136cc60ac0350ea166061dd86cee090b2baa12fd9ebf260. Jun 25 14:27:16.591697 systemd[1]: Started cri-containerd-642fab73a2b6a0749950c0ad3bf98c24c432a8e74a5d85ed1989e86f7974075c.scope - libcontainer container 642fab73a2b6a0749950c0ad3bf98c24c432a8e74a5d85ed1989e86f7974075c. Jun 25 14:27:16.601000 audit: BPF prog-id=64 op=LOAD Jun 25 14:27:16.602000 audit: BPF prog-id=65 op=LOAD Jun 25 14:27:16.602000 audit[2103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=1973 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462336638643438393132363930326664313336636336306163303335 Jun 25 14:27:16.602000 audit: BPF prog-id=66 op=LOAD Jun 25 14:27:16.602000 audit[2103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=1973 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462336638643438393132363930326664313336636336306163303335 Jun 25 14:27:16.602000 audit: BPF prog-id=66 op=UNLOAD Jun 25 14:27:16.603000 audit: BPF prog-id=65 op=UNLOAD Jun 25 14:27:16.603000 audit: BPF prog-id=67 op=LOAD Jun 25 14:27:16.603000 audit: BPF prog-id=68 op=LOAD Jun 25 14:27:16.603000 audit[2103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=1973 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462336638643438393132363930326664313336636336306163303335 Jun 25 14:27:16.603000 audit: BPF prog-id=69 op=LOAD Jun 25 14:27:16.603000 audit[2102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1972 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jun 25 14:27:16.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366364343736313566633133383130363638336662303961376132 Jun 25 14:27:16.603000 audit: BPF prog-id=70 op=LOAD Jun 25 14:27:16.603000 audit[2102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1972 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366364343736313566633133383130363638336662303961376132 Jun 25 14:27:16.603000 audit: BPF prog-id=70 op=UNLOAD Jun 25 14:27:16.603000 audit: BPF prog-id=69 op=UNLOAD Jun 25 14:27:16.603000 audit: BPF prog-id=71 op=LOAD Jun 25 14:27:16.603000 audit[2102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=1972 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366364343736313566633133383130363638336662303961376132 Jun 25 14:27:16.605000 audit: BPF prog-id=72 op=LOAD Jun 25 14:27:16.605000 audit: BPF prog-id=73 op=LOAD Jun 25 14:27:16.605000 audit[2104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=1974 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634326661623733613262366130373439393530633061643362663938 Jun 25 14:27:16.605000 audit: BPF prog-id=74 op=LOAD Jun 25 14:27:16.605000 audit[2104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=1974 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634326661623733613262366130373439393530633061643362663938 Jun 25 14:27:16.605000 audit: BPF prog-id=74 op=UNLOAD Jun 25 14:27:16.605000 audit: BPF prog-id=73 op=UNLOAD Jun 25 14:27:16.605000 audit: BPF prog-id=75 op=LOAD Jun 25 14:27:16.605000 audit[2104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=1974 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:16.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634326661623733613262366130373439393530633061643362663938 Jun 25 14:27:16.636804 containerd[1244]: time="2024-06-25T14:27:16.633278054Z" level=info msg="StartContainer for \"db3f8d489126902fd136cc60ac0350ea166061dd86cee090b2baa12fd9ebf260\" returns successfully" Jun 25 14:27:16.648533 containerd[1244]: time="2024-06-25T14:27:16.648473440Z" level=info msg="StartContainer for \"642fab73a2b6a0749950c0ad3bf98c24c432a8e74a5d85ed1989e86f7974075c\" returns successfully" Jun 25 14:27:16.648654 containerd[1244]: time="2024-06-25T14:27:16.648550160Z" level=info msg="StartContainer for \"1a6cd47615fc138106683fb09a7a250d2b30c4c542edc207b353b2334cf70962\" returns successfully" Jun 25 14:27:16.663869 kubelet[1896]: E0625 14:27:16.663816 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Jun 25 14:27:16.743674 kubelet[1896]: W0625 14:27:16.743614 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.743674 kubelet[1896]: E0625 14:27:16.743679 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jun 25 14:27:16.766996 kubelet[1896]: I0625 14:27:16.766891 1896 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:27:16.767258 kubelet[1896]: E0625 14:27:16.767209 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jun 25 14:27:17.285211 kubelet[1896]: E0625 14:27:17.285164 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:17.286999 kubelet[1896]: E0625 14:27:17.286968 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:17.287965 kubelet[1896]: E0625 14:27:17.287944 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:18.202205 kernel: kauditd_printk_skb: 135 callbacks suppressed Jun 25 14:27:18.202315 kernel: audit: type=1400 audit(1719325638.192:328): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 
14:27:18.202366 kernel: audit: type=1400 audit(1719325638.192:329): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.202385 kernel: audit: type=1300 audit(1719325638.192:329): arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40001d9900 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:18.192000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.192000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.192000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40001d9900 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:18.192000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:18.205011 kernel: audit: type=1327 audit(1719325638.192:329): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:18.192000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=7 a1=40007f2720 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:18.209637 kernel: audit: type=1300 audit(1719325638.192:328): arch=c00000b7 syscall=27 success=no exit=-13 a0=7 a1=40007f2720 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:18.192000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:18.212363 kernel: audit: type=1327 audit(1719325638.192:328): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:18.289285 kubelet[1896]: E0625 14:27:18.289202 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:18.363000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.363000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=4005c165e0 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.368861 kubelet[1896]: I0625 14:27:18.368840 1896 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:27:18.369838 kernel: audit: type=1400 audit(1719325638.363:330): avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.369889 kernel: audit: type=1300 audit(1719325638.363:330): arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=4005c165e0 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.369907 kernel: audit: type=1327 audit(1719325638.363:330): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.363000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.372304 kernel: audit: type=1400 audit(1719325638.363:331): avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.363000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.363000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=41 a1=4009b5c630 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.363000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.363000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.363000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=40068c81b0 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.363000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.369000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.369000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=48 a1=400678ec60 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.369000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.392000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.392000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=4e a1=4002a7ff80 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.392000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.392000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:18.392000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 
success=no exit=-13 a0=4e a1=4006604900 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:27:18.392000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:27:18.406895 kubelet[1896]: E0625 14:27:18.406853 1896 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 14:27:18.512020 kubelet[1896]: I0625 14:27:18.511899 1896 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 14:27:18.547258 kubelet[1896]: E0625 14:27:18.547211 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:27:18.648461 kubelet[1896]: E0625 14:27:18.648364 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:27:18.749531 kubelet[1896]: E0625 14:27:18.749487 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:27:18.850699 kubelet[1896]: E0625 14:27:18.850549 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:27:18.951521 kubelet[1896]: E0625 14:27:18.951478 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 14:27:19.251385 kubelet[1896]: I0625 14:27:19.251324 1896 apiserver.go:52] "Watching apiserver" Jun 25 14:27:19.260831 kubelet[1896]: I0625 14:27:19.260775 1896 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:27:20.477320 kubelet[1896]: E0625 14:27:20.477280 1896 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:20.996957 systemd[1]: Reloading. Jun 25 14:27:21.122918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
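The audit records above carry the process command line as a hex-encoded, NUL-separated PROCTITLE field. A minimal decoding sketch, assuming Python 3; the helper name decode_proctitle is illustrative, and the sample value is a truncated copy of one of the runc records in this log rather than a complete field:

# Minimal sketch: decode the hex-encoded, NUL-separated PROCTITLE field
# seen in the audit records above (assumes Python 3).
def decode_proctitle(hex_value: str) -> str:
    """Turn an audit PROCTITLE hex string into a readable command line."""
    raw = bytes.fromhex(hex_value)
    # argv elements are separated by NUL bytes in the audit record.
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F"
              "72756E632F6B38732E696F002D2D6C6F67")
    print(decode_proctitle(sample))
    # -> runc --root /run/containerd/runc/k8s.io --log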
Jun 25 14:27:21.179000 audit: BPF prog-id=76 op=LOAD Jun 25 14:27:21.179000 audit: BPF prog-id=56 op=UNLOAD Jun 25 14:27:21.180000 audit: BPF prog-id=77 op=LOAD Jun 25 14:27:21.180000 audit: BPF prog-id=38 op=UNLOAD Jun 25 14:27:21.181000 audit: BPF prog-id=78 op=LOAD Jun 25 14:27:21.181000 audit: BPF prog-id=52 op=UNLOAD Jun 25 14:27:21.181000 audit: BPF prog-id=79 op=LOAD Jun 25 14:27:21.181000 audit: BPF prog-id=64 op=UNLOAD Jun 25 14:27:21.182000 audit: BPF prog-id=80 op=LOAD Jun 25 14:27:21.182000 audit: BPF prog-id=39 op=UNLOAD Jun 25 14:27:21.182000 audit: BPF prog-id=81 op=LOAD Jun 25 14:27:21.182000 audit: BPF prog-id=82 op=LOAD Jun 25 14:27:21.183000 audit: BPF prog-id=40 op=UNLOAD Jun 25 14:27:21.183000 audit: BPF prog-id=41 op=UNLOAD Jun 25 14:27:21.184000 audit: BPF prog-id=83 op=LOAD Jun 25 14:27:21.184000 audit: BPF prog-id=42 op=UNLOAD Jun 25 14:27:21.184000 audit: BPF prog-id=84 op=LOAD Jun 25 14:27:21.184000 audit: BPF prog-id=85 op=LOAD Jun 25 14:27:21.184000 audit: BPF prog-id=43 op=UNLOAD Jun 25 14:27:21.184000 audit: BPF prog-id=44 op=UNLOAD Jun 25 14:27:21.184000 audit: BPF prog-id=86 op=LOAD Jun 25 14:27:21.184000 audit: BPF prog-id=45 op=UNLOAD Jun 25 14:27:21.185000 audit: BPF prog-id=87 op=LOAD Jun 25 14:27:21.185000 audit: BPF prog-id=46 op=UNLOAD Jun 25 14:27:21.186000 audit: BPF prog-id=88 op=LOAD Jun 25 14:27:21.186000 audit: BPF prog-id=89 op=LOAD Jun 25 14:27:21.186000 audit: BPF prog-id=47 op=UNLOAD Jun 25 14:27:21.186000 audit: BPF prog-id=48 op=UNLOAD Jun 25 14:27:21.186000 audit: BPF prog-id=90 op=LOAD Jun 25 14:27:21.186000 audit: BPF prog-id=72 op=UNLOAD Jun 25 14:27:21.187000 audit: BPF prog-id=91 op=LOAD Jun 25 14:27:21.187000 audit: BPF prog-id=60 op=UNLOAD Jun 25 14:27:21.189000 audit: BPF prog-id=92 op=LOAD Jun 25 14:27:21.189000 audit: BPF prog-id=49 op=UNLOAD Jun 25 14:27:21.189000 audit: BPF prog-id=93 op=LOAD Jun 25 14:27:21.189000 audit: BPF prog-id=94 op=LOAD Jun 25 14:27:21.189000 audit: BPF prog-id=50 op=UNLOAD Jun 25 14:27:21.189000 audit: BPF prog-id=51 op=UNLOAD Jun 25 14:27:21.190000 audit: BPF prog-id=95 op=LOAD Jun 25 14:27:21.190000 audit: BPF prog-id=67 op=UNLOAD Jun 25 14:27:21.203908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:21.227665 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:27:21.227910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:21.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:21.227987 systemd[1]: kubelet.service: Consumed 1.764s CPU time. Jun 25 14:27:21.237921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:27:21.325534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:27:21.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:21.389156 kubelet[2256]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
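The burst of "audit: BPF prog-id=N op=LOAD/UNLOAD" records above accompanies the systemd reload and kubelet restart. A hedged parsing sketch, assuming Python 3 and that journal lines of this shape are piped in on stdin; it simply pairs LOADs with UNLOADs to list program IDs that were loaded and not unloaded within the captured window:

import re
import sys

# Illustrative sketch only: tally "audit: BPF prog-id=N op=LOAD/UNLOAD"
# records (as seen in the reload burst above) and report IDs that were
# loaded but never unloaded in the input. Lines are read from stdin.
BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def outstanding_prog_ids(lines):
    loaded = set()
    for line in lines:
        for prog_id, op in BPF_RE.findall(line):
            if op == "LOAD":
                loaded.add(prog_id)
            else:
                loaded.discard(prog_id)
    return sorted(loaded, key=int)

if __name__ == "__main__":
    ids = outstanding_prog_ids(sys.stdin)
    print("still loaded:", ", ".join(ids) or "none")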
Jun 25 14:27:21.389156 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:27:21.389156 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:27:21.389156 kubelet[2256]: I0625 14:27:21.387582 2256 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:27:21.391905 kubelet[2256]: I0625 14:27:21.391877 2256 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:27:21.392038 kubelet[2256]: I0625 14:27:21.392026 2256 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:27:21.392287 kubelet[2256]: I0625 14:27:21.392267 2256 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:27:21.394281 kubelet[2256]: I0625 14:27:21.394262 2256 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:27:21.395679 kubelet[2256]: I0625 14:27:21.395653 2256 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:27:21.403284 kubelet[2256]: W0625 14:27:21.403266 2256 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:27:21.404087 kubelet[2256]: I0625 14:27:21.404072 2256 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 14:27:21.404285 kubelet[2256]: I0625 14:27:21.404275 2256 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:27:21.404537 kubelet[2256]: I0625 14:27:21.404460 2256 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:27:21.404537 kubelet[2256]: I0625 14:27:21.404494 2256 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:27:21.404537 kubelet[2256]: 
I0625 14:27:21.404502 2256 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:27:21.404537 kubelet[2256]: I0625 14:27:21.404537 2256 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:27:21.404713 kubelet[2256]: I0625 14:27:21.404619 2256 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:27:21.404713 kubelet[2256]: I0625 14:27:21.404633 2256 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:27:21.404713 kubelet[2256]: I0625 14:27:21.404654 2256 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:27:21.404713 kubelet[2256]: I0625 14:27:21.404664 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:27:21.405118 kubelet[2256]: I0625 14:27:21.405100 2256 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:27:21.405722 kubelet[2256]: I0625 14:27:21.405699 2256 server.go:1232] "Started kubelet" Jun 25 14:27:21.407280 kubelet[2256]: I0625 14:27:21.407261 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:27:21.408907 kubelet[2256]: E0625 14:27:21.408884 2256 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:27:21.409022 kubelet[2256]: E0625 14:27:21.409008 2256 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:27:21.410218 kubelet[2256]: I0625 14:27:21.410197 2256 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:27:21.410281 kubelet[2256]: I0625 14:27:21.410276 2256 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:27:21.410436 kubelet[2256]: I0625 14:27:21.410415 2256 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:27:21.410573 kubelet[2256]: I0625 14:27:21.410553 2256 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:27:21.411283 kubelet[2256]: I0625 14:27:21.411243 2256 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:27:21.412159 kubelet[2256]: I0625 14:27:21.412130 2256 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:27:21.412309 kubelet[2256]: I0625 14:27:21.412284 2256 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:27:21.425638 kubelet[2256]: I0625 14:27:21.425616 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:27:21.426613 kubelet[2256]: I0625 14:27:21.426596 2256 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:27:21.426697 kubelet[2256]: I0625 14:27:21.426687 2256 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:27:21.426771 kubelet[2256]: I0625 14:27:21.426761 2256 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:27:21.426882 kubelet[2256]: E0625 14:27:21.426872 2256 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:27:21.489702 kubelet[2256]: I0625 14:27:21.489671 2256 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:27:21.489702 kubelet[2256]: I0625 14:27:21.489699 2256 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:27:21.489852 kubelet[2256]: I0625 14:27:21.489716 2256 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:27:21.489880 kubelet[2256]: I0625 14:27:21.489863 2256 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:27:21.489909 kubelet[2256]: I0625 14:27:21.489883 2256 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:27:21.489909 kubelet[2256]: I0625 14:27:21.489890 2256 policy_none.go:49] "None policy: Start" Jun 25 14:27:21.490495 kubelet[2256]: I0625 14:27:21.490477 2256 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:27:21.490553 kubelet[2256]: I0625 14:27:21.490504 2256 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:27:21.490659 kubelet[2256]: I0625 14:27:21.490644 2256 state_mem.go:75] "Updated machine memory state" Jun 25 14:27:21.495283 kubelet[2256]: I0625 14:27:21.495261 2256 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:27:21.495968 kubelet[2256]: I0625 14:27:21.495941 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:27:21.513672 kubelet[2256]: I0625 14:27:21.513644 2256 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 14:27:21.520092 kubelet[2256]: I0625 14:27:21.520047 2256 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 14:27:21.520196 kubelet[2256]: I0625 14:27:21.520117 2256 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 14:27:21.527409 kubelet[2256]: I0625 14:27:21.527382 2256 topology_manager.go:215] "Topology Admit Handler" podUID="bc4ca8e2a5446d595902efa4c09c1216" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 14:27:21.527624 kubelet[2256]: I0625 14:27:21.527606 2256 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 14:27:21.527744 kubelet[2256]: I0625 14:27:21.527729 2256 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 14:27:21.536163 kubelet[2256]: E0625 14:27:21.534883 2256 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:21.711447 kubelet[2256]: I0625 14:27:21.711405 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4ca8e2a5446d595902efa4c09c1216-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc4ca8e2a5446d595902efa4c09c1216\") " 
pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:21.711447 kubelet[2256]: I0625 14:27:21.711452 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:21.711598 kubelet[2256]: I0625 14:27:21.711475 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:21.711598 kubelet[2256]: I0625 14:27:21.711494 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4ca8e2a5446d595902efa4c09c1216-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc4ca8e2a5446d595902efa4c09c1216\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:21.711598 kubelet[2256]: I0625 14:27:21.711516 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4ca8e2a5446d595902efa4c09c1216-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc4ca8e2a5446d595902efa4c09c1216\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:27:21.711598 kubelet[2256]: I0625 14:27:21.711535 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:21.711598 kubelet[2256]: I0625 14:27:21.711554 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:21.711770 kubelet[2256]: I0625 14:27:21.711574 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:27:21.711770 kubelet[2256]: I0625 14:27:21.711594 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 14:27:21.834074 kubelet[2256]: E0625 14:27:21.834025 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:21.834970 kubelet[2256]: E0625 14:27:21.834940 
2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:21.835610 kubelet[2256]: E0625 14:27:21.835588 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:22.409211 kubelet[2256]: I0625 14:27:22.409163 2256 apiserver.go:52] "Watching apiserver" Jun 25 14:27:22.453792 kubelet[2256]: E0625 14:27:22.453747 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:22.454144 kubelet[2256]: E0625 14:27:22.454120 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:22.454255 kubelet[2256]: E0625 14:27:22.454239 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:22.504495 kubelet[2256]: I0625 14:27:22.504442 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5043774829999998 podCreationTimestamp="2024-06-25 14:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:27:22.504323443 +0000 UTC m=+1.174487180" watchObservedRunningTime="2024-06-25 14:27:22.504377483 +0000 UTC m=+1.174541220" Jun 25 14:27:22.511257 kubelet[2256]: I0625 14:27:22.511201 2256 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:27:22.524225 kubelet[2256]: I0625 14:27:22.524181 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.524141506 podCreationTimestamp="2024-06-25 14:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:27:22.514385015 +0000 UTC m=+1.184548752" watchObservedRunningTime="2024-06-25 14:27:22.524141506 +0000 UTC m=+1.194305203" Jun 25 14:27:22.542165 kubelet[2256]: I0625 14:27:22.542116 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.542072967 podCreationTimestamp="2024-06-25 14:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:27:22.524581427 +0000 UTC m=+1.194745164" watchObservedRunningTime="2024-06-25 14:27:22.542072967 +0000 UTC m=+1.212236704" Jun 25 14:27:23.455972 kubelet[2256]: E0625 14:27:23.455935 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:26.393362 kubelet[2256]: E0625 14:27:26.393326 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:26.463381 kubelet[2256]: E0625 14:27:26.460432 2256 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:26.487764 sudo[1388]: pam_unix(sudo:session): session closed for user root Jun 25 14:27:26.488739 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 14:27:26.488898 kernel: audit: type=1106 audit(1719325646.486:378): pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:27:26.486000 audit[1388]: USER_END pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:27:26.486000 audit[1388]: CRED_DISP pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:27:26.493097 kernel: audit: type=1104 audit(1719325646.486:379): pid=1388 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:27:26.493317 sshd[1385]: pam_unix(sshd:session): session closed for user core Jun 25 14:27:26.493000 audit[1385]: USER_END pid=1385 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:26.493000 audit[1385]: CRED_DISP pid=1385 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:26.497635 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:57318.service: Deactivated successfully. Jun 25 14:27:26.498382 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:27:26.498576 systemd[1]: session-7.scope: Consumed 7.188s CPU time. Jun 25 14:27:26.499035 systemd-logind[1231]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:27:26.499678 kernel: audit: type=1106 audit(1719325646.493:380): pid=1385 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:26.499722 kernel: audit: type=1104 audit(1719325646.493:381): pid=1385 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:26.499756 kernel: audit: type=1131 audit(1719325646.496:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:57318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:27:26.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:57318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:26.499918 systemd-logind[1231]: Removed session 7. Jun 25 14:27:27.918989 kubelet[2256]: E0625 14:27:27.918903 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:28.258922 kubelet[2256]: E0625 14:27:28.258831 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:28.462807 kubelet[2256]: E0625 14:27:28.462776 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:28.463549 kubelet[2256]: E0625 14:27:28.463532 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:29.464296 kubelet[2256]: E0625 14:27:29.464263 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:30.652000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:30.652000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40008c2020 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:30.660737 kernel: audit: type=1400 audit(1719325650.652:383): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:30.660836 kernel: audit: type=1300 audit(1719325650.652:383): arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40008c2020 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:30.660858 kernel: audit: type=1327 audit(1719325650.652:383): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:30.652000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:30.653000 audit[2132]: AVC 
avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:30.666617 kernel: audit: type=1400 audit(1719325650.653:384): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:30.666679 kernel: audit: type=1300 audit(1719325650.653:384): arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000e881c0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:30.653000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000e881c0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:30.653000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:30.654000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:30.654000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40008c2280 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:30.654000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:30.654000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:27:30.654000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000e883c0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:30.654000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:33.170000 audit[2132]: AVC avc: 
denied { watch } for pid=2132 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520979 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:27:33.172685 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:27:33.172772 kernel: audit: type=1400 audit(1719325653.170:387): avc: denied { watch } for pid=2132 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520979 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:27:33.170000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40007be180 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:33.178712 kernel: audit: type=1300 audit(1719325653.170:387): arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=40007be180 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:27:33.178791 kernel: audit: type=1327 audit(1719325653.170:387): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:33.170000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:27:34.402876 kubelet[2256]: I0625 14:27:34.402834 2256 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:27:34.403529 containerd[1244]: time="2024-06-25T14:27:34.403436755Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 14:27:34.403774 kubelet[2256]: I0625 14:27:34.403651 2256 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:27:35.140232 kubelet[2256]: I0625 14:27:35.140196 2256 topology_manager.go:215] "Topology Admit Handler" podUID="dbdf540f-aa63-4a47-b63d-6f74e372e0f1" podNamespace="kube-system" podName="kube-proxy-vc8xv" Jun 25 14:27:35.145645 systemd[1]: Created slice kubepods-besteffort-poddbdf540f_aa63_4a47_b63d_6f74e372e0f1.slice - libcontainer container kubepods-besteffort-poddbdf540f_aa63_4a47_b63d_6f74e372e0f1.slice. Jun 25 14:27:35.248582 kubelet[2256]: I0625 14:27:35.248519 2256 topology_manager.go:215] "Topology Admit Handler" podUID="922f4969-94de-4ae9-94b7-740175618999" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-t2xpv" Jun 25 14:27:35.253602 systemd[1]: Created slice kubepods-besteffort-pod922f4969_94de_4ae9_94b7_740175618999.slice - libcontainer container kubepods-besteffort-pod922f4969_94de_4ae9_94b7_740175618999.slice. 
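The SELinux AVC records above (kube-controller-manager and kube-apiserver watching files under /etc/kubernetes/pki and the kubelet volume-plugin directory) share a fixed "avc: denied { perm } ... comm=... path=..." shape. A hedged summarizing sketch, assuming Python 3 and raw journal/dmesg lines on stdin; the grouping by (comm, permission, path) is a choice made here for readability, not something the log prescribes:

import re
import sys
from collections import Counter

# Illustrative sketch: group SELinux AVC denials like the
# "avc: denied { watch }" records above by (comm, permission, path).
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{\s*(?P<perm>\w+)\s*\}.*?'
    r'comm="(?P<comm>[^"]+)".*?path="(?P<path>[^"]+)"'
)

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = AVC_RE.search(line)
        if m:
            counts[(m["comm"], m["perm"], m["path"])] += 1
    return counts

if __name__ == "__main__":
    for (comm, perm, path), n in summarize(sys.stdin).most_common():
        print(f"{n:4d}  {comm:<16} {perm:<8} {path}")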
Jun 25 14:27:35.307632 kubelet[2256]: I0625 14:27:35.307586 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbdf540f-aa63-4a47-b63d-6f74e372e0f1-lib-modules\") pod \"kube-proxy-vc8xv\" (UID: \"dbdf540f-aa63-4a47-b63d-6f74e372e0f1\") " pod="kube-system/kube-proxy-vc8xv" Jun 25 14:27:35.307632 kubelet[2256]: I0625 14:27:35.307636 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbdf540f-aa63-4a47-b63d-6f74e372e0f1-kube-proxy\") pod \"kube-proxy-vc8xv\" (UID: \"dbdf540f-aa63-4a47-b63d-6f74e372e0f1\") " pod="kube-system/kube-proxy-vc8xv" Jun 25 14:27:35.307837 kubelet[2256]: I0625 14:27:35.307657 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbdf540f-aa63-4a47-b63d-6f74e372e0f1-xtables-lock\") pod \"kube-proxy-vc8xv\" (UID: \"dbdf540f-aa63-4a47-b63d-6f74e372e0f1\") " pod="kube-system/kube-proxy-vc8xv" Jun 25 14:27:35.307837 kubelet[2256]: I0625 14:27:35.307689 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzv7q\" (UniqueName: \"kubernetes.io/projected/dbdf540f-aa63-4a47-b63d-6f74e372e0f1-kube-api-access-rzv7q\") pod \"kube-proxy-vc8xv\" (UID: \"dbdf540f-aa63-4a47-b63d-6f74e372e0f1\") " pod="kube-system/kube-proxy-vc8xv" Jun 25 14:27:35.408645 kubelet[2256]: I0625 14:27:35.408525 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/922f4969-94de-4ae9-94b7-740175618999-var-lib-calico\") pod \"tigera-operator-76c4974c85-t2xpv\" (UID: \"922f4969-94de-4ae9-94b7-740175618999\") " pod="tigera-operator/tigera-operator-76c4974c85-t2xpv" Jun 25 14:27:35.408645 kubelet[2256]: I0625 14:27:35.408625 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8pgx\" (UniqueName: \"kubernetes.io/projected/922f4969-94de-4ae9-94b7-740175618999-kube-api-access-b8pgx\") pod \"tigera-operator-76c4974c85-t2xpv\" (UID: \"922f4969-94de-4ae9-94b7-740175618999\") " pod="tigera-operator/tigera-operator-76c4974c85-t2xpv" Jun 25 14:27:35.452506 kubelet[2256]: E0625 14:27:35.452474 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:35.453402 containerd[1244]: time="2024-06-25T14:27:35.453161904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vc8xv,Uid:dbdf540f-aa63-4a47-b63d-6f74e372e0f1,Namespace:kube-system,Attempt:0,}" Jun 25 14:27:35.473523 containerd[1244]: time="2024-06-25T14:27:35.473449754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:35.473523 containerd[1244]: time="2024-06-25T14:27:35.473497834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:35.473523 containerd[1244]: time="2024-06-25T14:27:35.473511994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:35.473523 containerd[1244]: time="2024-06-25T14:27:35.473522114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:35.501547 systemd[1]: Started cri-containerd-44fb1a02738caae92306b2466996aaf10cb28894b29a854d1ead005a39ab8f20.scope - libcontainer container 44fb1a02738caae92306b2466996aaf10cb28894b29a854d1ead005a39ab8f20. Jun 25 14:27:35.511000 audit: BPF prog-id=96 op=LOAD Jun 25 14:27:35.513367 kernel: audit: type=1334 audit(1719325655.511:388): prog-id=96 op=LOAD Jun 25 14:27:35.512000 audit: BPF prog-id=97 op=LOAD Jun 25 14:27:35.512000 audit[2363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001ab8b0 a2=78 a3=0 items=0 ppid=2353 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.517102 kernel: audit: type=1334 audit(1719325655.512:389): prog-id=97 op=LOAD Jun 25 14:27:35.517166 kernel: audit: type=1300 audit(1719325655.512:389): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001ab8b0 a2=78 a3=0 items=0 ppid=2353 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.517231 kernel: audit: type=1327 audit(1719325655.512:389): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434666231613032373338636161653932333036623234363639393661 Jun 25 14:27:35.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434666231613032373338636161653932333036623234363639393661 Jun 25 14:27:35.512000 audit: BPF prog-id=98 op=LOAD Jun 25 14:27:35.522979 kernel: audit: type=1334 audit(1719325655.512:390): prog-id=98 op=LOAD Jun 25 14:27:35.523040 kernel: audit: type=1300 audit(1719325655.512:390): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001ab640 a2=78 a3=0 items=0 ppid=2353 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.512000 audit[2363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001ab640 a2=78 a3=0 items=0 ppid=2353 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.523148 kernel: audit: type=1327 audit(1719325655.512:390): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434666231613032373338636161653932333036623234363639393661 Jun 25 14:27:35.512000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434666231613032373338636161653932333036623234363639393661 Jun 25 14:27:35.513000 audit: BPF prog-id=98 op=UNLOAD Jun 25 14:27:35.513000 audit: BPF prog-id=97 op=UNLOAD Jun 25 14:27:35.513000 audit: BPF prog-id=99 op=LOAD Jun 25 14:27:35.513000 audit[2363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001abb10 a2=78 a3=0 items=0 ppid=2353 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434666231613032373338636161653932333036623234363639393661 Jun 25 14:27:35.535266 containerd[1244]: time="2024-06-25T14:27:35.535228905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vc8xv,Uid:dbdf540f-aa63-4a47-b63d-6f74e372e0f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"44fb1a02738caae92306b2466996aaf10cb28894b29a854d1ead005a39ab8f20\"" Jun 25 14:27:35.536298 kubelet[2256]: E0625 14:27:35.536102 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:35.541540 containerd[1244]: time="2024-06-25T14:27:35.541502868Z" level=info msg="CreateContainer within sandbox \"44fb1a02738caae92306b2466996aaf10cb28894b29a854d1ead005a39ab8f20\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:27:35.552786 containerd[1244]: time="2024-06-25T14:27:35.552734194Z" level=info msg="CreateContainer within sandbox \"44fb1a02738caae92306b2466996aaf10cb28894b29a854d1ead005a39ab8f20\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcbc54cd8c8cd062ef1278d2ddbdaba1164b038dac559d9340dd5030b98f42bb\"" Jun 25 14:27:35.553309 containerd[1244]: time="2024-06-25T14:27:35.553279674Z" level=info msg="StartContainer for \"bcbc54cd8c8cd062ef1278d2ddbdaba1164b038dac559d9340dd5030b98f42bb\"" Jun 25 14:27:35.556151 containerd[1244]: time="2024-06-25T14:27:35.556118035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-t2xpv,Uid:922f4969-94de-4ae9-94b7-740175618999,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:27:35.577290 containerd[1244]: time="2024-06-25T14:27:35.577188846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:35.577290 containerd[1244]: time="2024-06-25T14:27:35.577261046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:35.577517 containerd[1244]: time="2024-06-25T14:27:35.577479846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:35.577609 containerd[1244]: time="2024-06-25T14:27:35.577504526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:35.584526 systemd[1]: Started cri-containerd-bcbc54cd8c8cd062ef1278d2ddbdaba1164b038dac559d9340dd5030b98f42bb.scope - libcontainer container bcbc54cd8c8cd062ef1278d2ddbdaba1164b038dac559d9340dd5030b98f42bb. Jun 25 14:27:35.595564 systemd[1]: Started cri-containerd-7a5eee98e468571475019158d338f968807e84e4e7ef84a859d75a1c3d5fdc40.scope - libcontainer container 7a5eee98e468571475019158d338f968807e84e4e7ef84a859d75a1c3d5fdc40. Jun 25 14:27:35.599000 audit: BPF prog-id=100 op=LOAD Jun 25 14:27:35.599000 audit[2402]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2353 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626335346364386338636430363265663132373864326464626461 Jun 25 14:27:35.599000 audit: BPF prog-id=101 op=LOAD Jun 25 14:27:35.599000 audit[2402]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2353 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626335346364386338636430363265663132373864326464626461 Jun 25 14:27:35.599000 audit: BPF prog-id=101 op=UNLOAD Jun 25 14:27:35.599000 audit: BPF prog-id=100 op=UNLOAD Jun 25 14:27:35.599000 audit: BPF prog-id=102 op=LOAD Jun 25 14:27:35.599000 audit[2402]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2353 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626335346364386338636430363265663132373864326464626461 Jun 25 14:27:35.605000 audit: BPF prog-id=103 op=LOAD Jun 25 14:27:35.605000 audit: BPF prog-id=104 op=LOAD Jun 25 14:27:35.605000 audit[2422]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2403 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356565653938653436383537313437353031393135386433333866 Jun 25 14:27:35.606000 audit: BPF prog-id=105 op=LOAD Jun 25 14:27:35.606000 audit[2422]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 
a1=400018d640 a2=78 a3=0 items=0 ppid=2403 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356565653938653436383537313437353031393135386433333866 Jun 25 14:27:35.608000 audit: BPF prog-id=105 op=UNLOAD Jun 25 14:27:35.608000 audit: BPF prog-id=104 op=UNLOAD Jun 25 14:27:35.609000 audit: BPF prog-id=106 op=LOAD Jun 25 14:27:35.609000 audit[2422]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2403 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356565653938653436383537313437353031393135386433333866 Jun 25 14:27:35.634248 containerd[1244]: time="2024-06-25T14:27:35.634199675Z" level=info msg="StartContainer for \"bcbc54cd8c8cd062ef1278d2ddbdaba1164b038dac559d9340dd5030b98f42bb\" returns successfully" Jun 25 14:27:35.637165 containerd[1244]: time="2024-06-25T14:27:35.637128956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-t2xpv,Uid:922f4969-94de-4ae9-94b7-740175618999,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a5eee98e468571475019158d338f968807e84e4e7ef84a859d75a1c3d5fdc40\"" Jun 25 14:27:35.639144 containerd[1244]: time="2024-06-25T14:27:35.639116757Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:27:35.792000 audit[2488]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:35.792000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca26b450 a2=0 a3=1 items=0 ppid=2430 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:27:35.792000 audit[2489]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.792000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf253400 a2=0 a3=1 items=0 ppid=2430 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:27:35.793000 audit[2490]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:35.793000 
audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc13aa620 a2=0 a3=1 items=0 ppid=2430 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:27:35.795000 audit[2491]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.795000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdca15780 a2=0 a3=1 items=0 ppid=2430 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:27:35.796000 audit[2492]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:35.796000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2597ba0 a2=0 a3=1 items=0 ppid=2430 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:27:35.796000 audit[2493]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.796000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2cd6380 a2=0 a3=1 items=0 ppid=2430 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:27:35.896000 audit[2494]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.896000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffa694cc0 a2=0 a3=1 items=0 ppid=2430 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:27:35.901000 audit[2496]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.901000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd3e5fc00 a2=0 a3=1 items=0 ppid=2430 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.901000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:27:35.907000 audit[2499]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.907000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffddbda8f0 a2=0 a3=1 items=0 ppid=2430 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:27:35.908000 audit[2500]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.908000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff34703f0 a2=0 a3=1 items=0 ppid=2430 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:27:35.911000 audit[2502]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.911000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcea72750 a2=0 a3=1 items=0 ppid=2430 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:27:35.912000 audit[2503]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.912000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd80b1ab0 a2=0 a3=1 items=0 ppid=2430 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:27:35.914000 audit[2505]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 
14:27:35.914000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcaeb3970 a2=0 a3=1 items=0 ppid=2430 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:27:35.921000 audit[2508]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.921000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff99e6c30 a2=0 a3=1 items=0 ppid=2430 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:27:35.922000 audit[2509]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.922000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdabf68f0 a2=0 a3=1 items=0 ppid=2430 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:27:35.925000 audit[2511]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.925000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe3021ab0 a2=0 a3=1 items=0 ppid=2430 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:27:35.926000 audit[2512]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.926000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd360fc70 a2=0 a3=1 items=0 ppid=2430 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.926000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:27:35.928000 audit[2514]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.928000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2c888e0 a2=0 a3=1 items=0 ppid=2430 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:27:35.932000 audit[2517]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.932000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2866d20 a2=0 a3=1 items=0 ppid=2430 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:27:35.936000 audit[2520]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.936000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc6c132c0 a2=0 a3=1 items=0 ppid=2430 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:27:35.937000 audit[2521]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.937000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcbee0aa0 a2=0 a3=1 items=0 ppid=2430 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:27:35.940000 audit[2523]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.940000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff58a0770 a2=0 a3=1 items=0 ppid=2430 pid=2523 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:27:35.943000 audit[2526]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.943000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcac9fbd0 a2=0 a3=1 items=0 ppid=2430 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:27:35.945000 audit[2527]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.945000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff46dd6b0 a2=0 a3=1 items=0 ppid=2430 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:27:35.948000 audit[2529]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:27:35.948000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcab56660 a2=0 a3=1 items=0 ppid=2430 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:27:35.970000 audit[2535]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:35.970000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffe8b4c690 a2=0 a3=1 items=0 ppid=2430 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:35.973000 audit[2535]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:35.973000 audit[2535]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe8b4c690 a2=0 a3=1 items=0 ppid=2430 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:35.978000 audit[2542]: NETFILTER_CFG table=filter:65 family=2 entries=14 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:35.978000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffdd4b0900 a2=0 a3=1 items=0 ppid=2430 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.978000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:35.979000 audit[2542]: NETFILTER_CFG table=nat:66 family=2 entries=12 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:35.979000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdd4b0900 a2=0 a3=1 items=0 ppid=2430 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:35.992000 audit[2543]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:35.992000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcd497610 a2=0 a3=1 items=0 ppid=2430 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.992000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:27:35.995000 audit[2545]: NETFILTER_CFG table=filter:68 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:35.995000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffff943f50 a2=0 a3=1 items=0 ppid=2430 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:35.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:27:36.008000 audit[2548]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.008000 audit[2548]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=836 a0=3 a1=ffffd757b000 a2=0 a3=1 items=0 ppid=2430 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:27:36.009000 audit[2549]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.009000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2526fb0 a2=0 a3=1 items=0 ppid=2430 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:27:36.012000 audit[2551]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.012000 audit[2551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc4008bf0 a2=0 a3=1 items=0 ppid=2430 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:27:36.013000 audit[2552]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.013000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1296260 a2=0 a3=1 items=0 ppid=2430 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:27:36.017000 audit[2554]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.017000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff899cba0 a2=0 a3=1 items=0 ppid=2430 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 
25 14:27:36.021000 audit[2557]: NETFILTER_CFG table=filter:74 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.021000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe9265cc0 a2=0 a3=1 items=0 ppid=2430 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:27:36.023000 audit[2558]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.023000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6b40400 a2=0 a3=1 items=0 ppid=2430 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:27:36.025000 audit[2560]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.025000 audit[2560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc07609a0 a2=0 a3=1 items=0 ppid=2430 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:27:36.027000 audit[2561]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.027000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd83e6d00 a2=0 a3=1 items=0 ppid=2430 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:27:36.029000 audit[2563]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.029000 audit[2563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb86c600 a2=0 a3=1 items=0 ppid=2430 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.029000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:27:36.033000 audit[2566]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.033000 audit[2566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3cbbba0 a2=0 a3=1 items=0 ppid=2430 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:27:36.039000 audit[2569]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.039000 audit[2569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdffb6610 a2=0 a3=1 items=0 ppid=2430 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:27:36.040000 audit[2570]: NETFILTER_CFG table=nat:81 family=10 entries=1 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.040000 audit[2570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2459590 a2=0 a3=1 items=0 ppid=2430 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:27:36.043000 audit[2572]: NETFILTER_CFG table=nat:82 family=10 entries=2 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.043000 audit[2572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdcd45420 a2=0 a3=1 items=0 ppid=2430 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:27:36.048000 audit[2575]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.048000 audit[2575]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd681e2a0 a2=0 a3=1 items=0 ppid=2430 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:27:36.049000 audit[2576]: NETFILTER_CFG table=nat:84 family=10 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.049000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd429b620 a2=0 a3=1 items=0 ppid=2430 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:27:36.052000 audit[2578]: NETFILTER_CFG table=nat:85 family=10 entries=2 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.052000 audit[2578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffcb185be0 a2=0 a3=1 items=0 ppid=2430 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:27:36.054000 audit[2579]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.054000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3251490 a2=0 a3=1 items=0 ppid=2430 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:27:36.056000 audit[2581]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:27:36.056000 audit[2581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffced504b0 a2=0 a3=1 items=0 ppid=2430 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:27:36.059000 audit[2584]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 
14:27:36.059000 audit[2584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdca1cd70 a2=0 a3=1 items=0 ppid=2430 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:27:36.063000 audit[2586]: NETFILTER_CFG table=filter:89 family=10 entries=3 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:27:36.063000 audit[2586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=fffff2493fc0 a2=0 a3=1 items=0 ppid=2430 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.063000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:36.063000 audit[2586]: NETFILTER_CFG table=nat:90 family=10 entries=7 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:27:36.063000 audit[2586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff2493fc0 a2=0 a3=1 items=0 ppid=2430 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.063000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:36.463811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903597478.mount: Deactivated successfully. 
Jun 25 14:27:36.480448 kubelet[2256]: E0625 14:27:36.477506 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:36.801002 containerd[1244]: time="2024-06-25T14:27:36.800873916Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:36.801578 containerd[1244]: time="2024-06-25T14:27:36.801534557Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473618" Jun 25 14:27:36.802440 containerd[1244]: time="2024-06-25T14:27:36.802400797Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:36.803748 containerd[1244]: time="2024-06-25T14:27:36.803708838Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:36.805221 containerd[1244]: time="2024-06-25T14:27:36.805170038Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:36.806106 containerd[1244]: time="2024-06-25T14:27:36.806067079Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.166723162s" Jun 25 14:27:36.806147 containerd[1244]: time="2024-06-25T14:27:36.806112319Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:27:36.810320 containerd[1244]: time="2024-06-25T14:27:36.810286161Z" level=info msg="CreateContainer within sandbox \"7a5eee98e468571475019158d338f968807e84e4e7ef84a859d75a1c3d5fdc40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:27:36.818891 containerd[1244]: time="2024-06-25T14:27:36.818828765Z" level=info msg="CreateContainer within sandbox \"7a5eee98e468571475019158d338f968807e84e4e7ef84a859d75a1c3d5fdc40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e0c3bed07cfa7c0baefae624946f4178ddd430a1e4f1c059e96957011c1af9f1\"" Jun 25 14:27:36.820561 containerd[1244]: time="2024-06-25T14:27:36.819504365Z" level=info msg="StartContainer for \"e0c3bed07cfa7c0baefae624946f4178ddd430a1e4f1c059e96957011c1af9f1\"" Jun 25 14:27:36.844537 systemd[1]: Started cri-containerd-e0c3bed07cfa7c0baefae624946f4178ddd430a1e4f1c059e96957011c1af9f1.scope - libcontainer container e0c3bed07cfa7c0baefae624946f4178ddd430a1e4f1c059e96957011c1af9f1. 
Jun 25 14:27:36.851000 audit: BPF prog-id=107 op=LOAD Jun 25 14:27:36.852000 audit: BPF prog-id=108 op=LOAD Jun 25 14:27:36.852000 audit[2603]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2403 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633362656430376366613763306261656661653632343934366634 Jun 25 14:27:36.852000 audit: BPF prog-id=109 op=LOAD Jun 25 14:27:36.852000 audit[2603]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2403 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633362656430376366613763306261656661653632343934366634 Jun 25 14:27:36.852000 audit: BPF prog-id=109 op=UNLOAD Jun 25 14:27:36.852000 audit: BPF prog-id=108 op=UNLOAD Jun 25 14:27:36.852000 audit: BPF prog-id=110 op=LOAD Jun 25 14:27:36.852000 audit[2603]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2403 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:36.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633362656430376366613763306261656661653632343934366634 Jun 25 14:27:36.872138 containerd[1244]: time="2024-06-25T14:27:36.872079350Z" level=info msg="StartContainer for \"e0c3bed07cfa7c0baefae624946f4178ddd430a1e4f1c059e96957011c1af9f1\" returns successfully" Jun 25 14:27:37.504901 kubelet[2256]: I0625 14:27:37.504859 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vc8xv" podStartSLOduration=2.504821274 podCreationTimestamp="2024-06-25 14:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:27:36.487528609 +0000 UTC m=+15.157692346" watchObservedRunningTime="2024-06-25 14:27:37.504821274 +0000 UTC m=+16.174985011" Jun 25 14:27:37.507868 kubelet[2256]: I0625 14:27:37.507581 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-t2xpv" podStartSLOduration=1.336900432 podCreationTimestamp="2024-06-25 14:27:35 +0000 UTC" firstStartedPulling="2024-06-25 14:27:35.638166677 +0000 UTC m=+14.308330414" lastFinishedPulling="2024-06-25 14:27:36.80880116 +0000 UTC m=+15.478964897" observedRunningTime="2024-06-25 14:27:37.504155993 +0000 UTC m=+16.174319730" watchObservedRunningTime="2024-06-25 14:27:37.507534915 +0000 UTC m=+16.177698652" Jun 25 14:27:41.350000 
audit[2636]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.354350 kernel: kauditd_printk_skb: 199 callbacks suppressed Jun 25 14:27:41.354442 kernel: audit: type=1325 audit(1719325661.350:464): table=filter:91 family=2 entries=15 op=nft_register_rule pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.354485 kernel: audit: type=1300 audit(1719325661.350:464): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff0d80820 a2=0 a3=1 items=0 ppid=2430 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.350000 audit[2636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff0d80820 a2=0 a3=1 items=0 ppid=2430 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.358336 kernel: audit: type=1327 audit(1719325661.350:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.351000 audit[2636]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.351000 audit[2636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0d80820 a2=0 a3=1 items=0 ppid=2430 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.366507 kernel: audit: type=1325 audit(1719325661.351:465): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.366579 kernel: audit: type=1300 audit(1719325661.351:465): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0d80820 a2=0 a3=1 items=0 ppid=2430 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.366602 kernel: audit: type=1327 audit(1719325661.351:465): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.365000 audit[2638]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.365000 audit[2638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc21ecce0 a2=0 a3=1 items=0 ppid=2430 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.377680 kernel: audit: type=1325 audit(1719325661.365:466): 
table=filter:93 family=2 entries=16 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.377732 kernel: audit: type=1300 audit(1719325661.365:466): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc21ecce0 a2=0 a3=1 items=0 ppid=2430 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.377753 kernel: audit: type=1327 audit(1719325661.365:466): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.372000 audit[2638]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.372000 audit[2638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc21ecce0 a2=0 a3=1 items=0 ppid=2430 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:41.386367 kernel: audit: type=1325 audit(1719325661.372:467): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:41.483382 kubelet[2256]: I0625 14:27:41.483333 2256 topology_manager.go:215] "Topology Admit Handler" podUID="6d971f98-928c-42ca-80dd-81e6fbeb0250" podNamespace="calico-system" podName="calico-typha-6d95557676-qxrnx" Jun 25 14:27:41.490237 systemd[1]: Created slice kubepods-besteffort-pod6d971f98_928c_42ca_80dd_81e6fbeb0250.slice - libcontainer container kubepods-besteffort-pod6d971f98_928c_42ca_80dd_81e6fbeb0250.slice. Jun 25 14:27:41.533311 kubelet[2256]: I0625 14:27:41.533266 2256 topology_manager.go:215] "Topology Admit Handler" podUID="2b607b7c-5a45-4942-be61-b62ed9e26393" podNamespace="calico-system" podName="calico-node-p8lzk" Jun 25 14:27:41.541737 systemd[1]: Created slice kubepods-besteffort-pod2b607b7c_5a45_4942_be61_b62ed9e26393.slice - libcontainer container kubepods-besteffort-pod2b607b7c_5a45_4942_be61_b62ed9e26393.slice. 
Jun 25 14:27:41.637637 kubelet[2256]: I0625 14:27:41.637512 2256 topology_manager.go:215] "Topology Admit Handler" podUID="eca06206-c460-41f2-8686-c513e245df74" podNamespace="calico-system" podName="csi-node-driver-jm8qv" Jun 25 14:27:41.637822 kubelet[2256]: E0625 14:27:41.637790 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:41.653080 kubelet[2256]: I0625 14:27:41.653034 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbrhz\" (UniqueName: \"kubernetes.io/projected/eca06206-c460-41f2-8686-c513e245df74-kube-api-access-mbrhz\") pod \"csi-node-driver-jm8qv\" (UID: \"eca06206-c460-41f2-8686-c513e245df74\") " pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:41.653080 kubelet[2256]: I0625 14:27:41.653083 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh54j\" (UniqueName: \"kubernetes.io/projected/6d971f98-928c-42ca-80dd-81e6fbeb0250-kube-api-access-qh54j\") pod \"calico-typha-6d95557676-qxrnx\" (UID: \"6d971f98-928c-42ca-80dd-81e6fbeb0250\") " pod="calico-system/calico-typha-6d95557676-qxrnx" Jun 25 14:27:41.653308 kubelet[2256]: I0625 14:27:41.653106 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-cni-bin-dir\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653308 kubelet[2256]: I0625 14:27:41.653127 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-cni-net-dir\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653308 kubelet[2256]: I0625 14:27:41.653158 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-cni-log-dir\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653308 kubelet[2256]: I0625 14:27:41.653180 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b607b7c-5a45-4942-be61-b62ed9e26393-node-certs\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653308 kubelet[2256]: I0625 14:27:41.653199 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-var-lib-calico\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653449 kubelet[2256]: I0625 14:27:41.653219 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5vm\" (UniqueName: 
\"kubernetes.io/projected/2b607b7c-5a45-4942-be61-b62ed9e26393-kube-api-access-zv5vm\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653449 kubelet[2256]: I0625 14:27:41.653268 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eca06206-c460-41f2-8686-c513e245df74-socket-dir\") pod \"csi-node-driver-jm8qv\" (UID: \"eca06206-c460-41f2-8686-c513e245df74\") " pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:41.653449 kubelet[2256]: I0625 14:27:41.653309 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-xtables-lock\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653449 kubelet[2256]: I0625 14:27:41.653373 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-policysync\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653449 kubelet[2256]: I0625 14:27:41.653401 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-flexvol-driver-host\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653617 kubelet[2256]: I0625 14:27:41.653420 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eca06206-c460-41f2-8686-c513e245df74-varrun\") pod \"csi-node-driver-jm8qv\" (UID: \"eca06206-c460-41f2-8686-c513e245df74\") " pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:41.653617 kubelet[2256]: I0625 14:27:41.653442 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-var-run-calico\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653617 kubelet[2256]: I0625 14:27:41.653484 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b607b7c-5a45-4942-be61-b62ed9e26393-tigera-ca-bundle\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653617 kubelet[2256]: I0625 14:27:41.653511 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eca06206-c460-41f2-8686-c513e245df74-registration-dir\") pod \"csi-node-driver-jm8qv\" (UID: \"eca06206-c460-41f2-8686-c513e245df74\") " pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:41.653617 kubelet[2256]: I0625 14:27:41.653546 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6d971f98-928c-42ca-80dd-81e6fbeb0250-tigera-ca-bundle\") pod \"calico-typha-6d95557676-qxrnx\" (UID: \"6d971f98-928c-42ca-80dd-81e6fbeb0250\") " pod="calico-system/calico-typha-6d95557676-qxrnx" Jun 25 14:27:41.653920 kubelet[2256]: I0625 14:27:41.653571 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d971f98-928c-42ca-80dd-81e6fbeb0250-typha-certs\") pod \"calico-typha-6d95557676-qxrnx\" (UID: \"6d971f98-928c-42ca-80dd-81e6fbeb0250\") " pod="calico-system/calico-typha-6d95557676-qxrnx" Jun 25 14:27:41.653920 kubelet[2256]: I0625 14:27:41.653593 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b607b7c-5a45-4942-be61-b62ed9e26393-lib-modules\") pod \"calico-node-p8lzk\" (UID: \"2b607b7c-5a45-4942-be61-b62ed9e26393\") " pod="calico-system/calico-node-p8lzk" Jun 25 14:27:41.653920 kubelet[2256]: I0625 14:27:41.653638 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eca06206-c460-41f2-8686-c513e245df74-kubelet-dir\") pod \"csi-node-driver-jm8qv\" (UID: \"eca06206-c460-41f2-8686-c513e245df74\") " pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:41.763439 kubelet[2256]: E0625 14:27:41.763369 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:41.763439 kubelet[2256]: W0625 14:27:41.763391 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:41.764233 kubelet[2256]: E0625 14:27:41.764183 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:41.764233 kubelet[2256]: W0625 14:27:41.764230 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:41.765131 kubelet[2256]: E0625 14:27:41.764487 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:41.765131 kubelet[2256]: E0625 14:27:41.764487 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:41.772098 kubelet[2256]: E0625 14:27:41.772079 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:41.772284 kubelet[2256]: W0625 14:27:41.772267 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:41.772381 kubelet[2256]: E0625 14:27:41.772369 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:41.778406 kubelet[2256]: E0625 14:27:41.778379 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:41.778406 kubelet[2256]: W0625 14:27:41.778400 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:41.778527 kubelet[2256]: E0625 14:27:41.778421 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:41.784548 kubelet[2256]: E0625 14:27:41.784527 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:41.784548 kubelet[2256]: W0625 14:27:41.784542 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:41.784670 kubelet[2256]: E0625 14:27:41.784559 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:41.794974 kubelet[2256]: E0625 14:27:41.794933 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:41.798305 containerd[1244]: time="2024-06-25T14:27:41.798247813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d95557676-qxrnx,Uid:6d971f98-928c-42ca-80dd-81e6fbeb0250,Namespace:calico-system,Attempt:0,}" Jun 25 14:27:41.837263 containerd[1244]: time="2024-06-25T14:27:41.836841346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:41.837263 containerd[1244]: time="2024-06-25T14:27:41.837240266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:41.837452 containerd[1244]: time="2024-06-25T14:27:41.837268906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:41.837452 containerd[1244]: time="2024-06-25T14:27:41.837282186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:41.845405 kubelet[2256]: E0625 14:27:41.845378 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:41.850044 containerd[1244]: time="2024-06-25T14:27:41.850001030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p8lzk,Uid:2b607b7c-5a45-4942-be61-b62ed9e26393,Namespace:calico-system,Attempt:0,}" Jun 25 14:27:41.861528 systemd[1]: Started cri-containerd-bd67c7b02c25faacb6733254bed7a9eb58933b5472d4213dee2ab4f8f0e93fcd.scope - libcontainer container bd67c7b02c25faacb6733254bed7a9eb58933b5472d4213dee2ab4f8f0e93fcd. Jun 25 14:27:41.873999 containerd[1244]: time="2024-06-25T14:27:41.873901278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:27:41.874456 containerd[1244]: time="2024-06-25T14:27:41.874078518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:41.874456 containerd[1244]: time="2024-06-25T14:27:41.874107838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:27:41.874456 containerd[1244]: time="2024-06-25T14:27:41.874122398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:27:41.882000 audit: BPF prog-id=111 op=LOAD Jun 25 14:27:41.883000 audit: BPF prog-id=112 op=LOAD Jun 25 14:27:41.883000 audit[2669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2659 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264363763376230326332356661616362363733333235346265643761 Jun 25 14:27:41.883000 audit: BPF prog-id=113 op=LOAD Jun 25 14:27:41.883000 audit[2669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2659 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264363763376230326332356661616362363733333235346265643761 Jun 25 14:27:41.883000 audit: BPF prog-id=113 op=UNLOAD Jun 25 14:27:41.883000 audit: BPF prog-id=112 op=UNLOAD Jun 25 14:27:41.883000 audit: BPF prog-id=114 op=LOAD Jun 25 14:27:41.883000 audit[2669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2659 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264363763376230326332356661616362363733333235346265643761 Jun 25 14:27:41.904526 systemd[1]: Started cri-containerd-ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06.scope - libcontainer container ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06. 
Jun 25 14:27:41.915000 audit: BPF prog-id=115 op=LOAD Jun 25 14:27:41.917000 audit: BPF prog-id=116 op=LOAD Jun 25 14:27:41.917000 audit[2705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2693 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165396431393833643732373365653134633833326237643439353264 Jun 25 14:27:41.917000 audit: BPF prog-id=117 op=LOAD Jun 25 14:27:41.917000 audit[2705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2693 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165396431393833643732373365653134633833326237643439353264 Jun 25 14:27:41.917000 audit: BPF prog-id=117 op=UNLOAD Jun 25 14:27:41.917000 audit: BPF prog-id=116 op=UNLOAD Jun 25 14:27:41.917000 audit: BPF prog-id=118 op=LOAD Jun 25 14:27:41.917000 audit[2705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2693 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:41.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165396431393833643732373365653134633833326237643439353264 Jun 25 14:27:41.919579 containerd[1244]: time="2024-06-25T14:27:41.919543574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d95557676-qxrnx,Uid:6d971f98-928c-42ca-80dd-81e6fbeb0250,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd67c7b02c25faacb6733254bed7a9eb58933b5472d4213dee2ab4f8f0e93fcd\"" Jun 25 14:27:41.920257 kubelet[2256]: E0625 14:27:41.920230 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:41.923064 containerd[1244]: time="2024-06-25T14:27:41.922511375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:27:41.936200 containerd[1244]: time="2024-06-25T14:27:41.936156540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p8lzk,Uid:2b607b7c-5a45-4942-be61-b62ed9e26393,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\"" Jun 25 14:27:41.937045 kubelet[2256]: E0625 14:27:41.936858 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:42.389000 audit[2736]: NETFILTER_CFG 
table=filter:95 family=2 entries=16 op=nft_register_rule pid=2736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:42.389000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe55f3ed0 a2=0 a3=1 items=0 ppid=2430 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:42.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:42.389000 audit[2736]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:42.389000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe55f3ed0 a2=0 a3=1 items=0 ppid=2430 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:42.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:43.429575 kubelet[2256]: E0625 14:27:43.429477 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:43.927295 containerd[1244]: time="2024-06-25T14:27:43.927239160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:43.927903 containerd[1244]: time="2024-06-25T14:27:43.927803600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:27:43.929254 containerd[1244]: time="2024-06-25T14:27:43.929211601Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:43.930688 containerd[1244]: time="2024-06-25T14:27:43.930655361Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:43.932201 containerd[1244]: time="2024-06-25T14:27:43.932165521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:43.933374 containerd[1244]: time="2024-06-25T14:27:43.933315002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.010763147s" Jun 25 14:27:43.933495 containerd[1244]: time="2024-06-25T14:27:43.933473402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 
14:27:43.935131 containerd[1244]: time="2024-06-25T14:27:43.935101002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:27:43.952407 containerd[1244]: time="2024-06-25T14:27:43.952357088Z" level=info msg="CreateContainer within sandbox \"bd67c7b02c25faacb6733254bed7a9eb58933b5472d4213dee2ab4f8f0e93fcd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:27:43.962613 containerd[1244]: time="2024-06-25T14:27:43.962566891Z" level=info msg="CreateContainer within sandbox \"bd67c7b02c25faacb6733254bed7a9eb58933b5472d4213dee2ab4f8f0e93fcd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dd65e5028c3f439547479428dab2a4bc24a2f8c489216a98a3adfc42db0ac9f0\"" Jun 25 14:27:43.964397 containerd[1244]: time="2024-06-25T14:27:43.964073211Z" level=info msg="StartContainer for \"dd65e5028c3f439547479428dab2a4bc24a2f8c489216a98a3adfc42db0ac9f0\"" Jun 25 14:27:43.997527 systemd[1]: Started cri-containerd-dd65e5028c3f439547479428dab2a4bc24a2f8c489216a98a3adfc42db0ac9f0.scope - libcontainer container dd65e5028c3f439547479428dab2a4bc24a2f8c489216a98a3adfc42db0ac9f0. Jun 25 14:27:44.012000 audit: BPF prog-id=119 op=LOAD Jun 25 14:27:44.012000 audit: BPF prog-id=120 op=LOAD Jun 25 14:27:44.012000 audit[2751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=2659 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:44.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464363565353032386333663433393534373437393432386461623261 Jun 25 14:27:44.012000 audit: BPF prog-id=121 op=LOAD Jun 25 14:27:44.012000 audit[2751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=2659 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:44.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464363565353032386333663433393534373437393432386461623261 Jun 25 14:27:44.012000 audit: BPF prog-id=121 op=UNLOAD Jun 25 14:27:44.013000 audit: BPF prog-id=120 op=UNLOAD Jun 25 14:27:44.013000 audit: BPF prog-id=122 op=LOAD Jun 25 14:27:44.013000 audit[2751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=2659 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:44.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464363565353032386333663433393534373437393432386461623261 Jun 25 14:27:44.039574 containerd[1244]: time="2024-06-25T14:27:44.039513913Z" level=info msg="StartContainer for 
\"dd65e5028c3f439547479428dab2a4bc24a2f8c489216a98a3adfc42db0ac9f0\" returns successfully" Jun 25 14:27:44.577952 kubelet[2256]: E0625 14:27:44.577922 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:44.582575 kubelet[2256]: E0625 14:27:44.582554 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.582708 kubelet[2256]: W0625 14:27:44.582681 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.582779 kubelet[2256]: E0625 14:27:44.582768 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.583015 kubelet[2256]: E0625 14:27:44.583002 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.583096 kubelet[2256]: W0625 14:27:44.583083 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.583159 kubelet[2256]: E0625 14:27:44.583148 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.583396 kubelet[2256]: E0625 14:27:44.583382 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.583483 kubelet[2256]: W0625 14:27:44.583469 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.583544 kubelet[2256]: E0625 14:27:44.583535 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.584421 kubelet[2256]: E0625 14:27:44.584398 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.584520 kubelet[2256]: W0625 14:27:44.584507 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.584582 kubelet[2256]: E0625 14:27:44.584573 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:44.585189 kubelet[2256]: E0625 14:27:44.585174 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.585296 kubelet[2256]: W0625 14:27:44.585281 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.585376 kubelet[2256]: E0625 14:27:44.585365 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.587820 kubelet[2256]: E0625 14:27:44.587435 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.587820 kubelet[2256]: W0625 14:27:44.587449 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.587820 kubelet[2256]: E0625 14:27:44.587465 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.587820 kubelet[2256]: E0625 14:27:44.587694 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.587820 kubelet[2256]: W0625 14:27:44.587706 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.587820 kubelet[2256]: E0625 14:27:44.587718 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.587996 kubelet[2256]: E0625 14:27:44.587895 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.587996 kubelet[2256]: W0625 14:27:44.587904 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.587996 kubelet[2256]: E0625 14:27:44.587915 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.588104 kubelet[2256]: E0625 14:27:44.588083 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.588104 kubelet[2256]: W0625 14:27:44.588095 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.588162 kubelet[2256]: E0625 14:27:44.588107 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:44.588393 kubelet[2256]: E0625 14:27:44.588364 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.588393 kubelet[2256]: W0625 14:27:44.588379 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.588393 kubelet[2256]: E0625 14:27:44.588392 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.588583 kubelet[2256]: E0625 14:27:44.588564 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.588583 kubelet[2256]: W0625 14:27:44.588577 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.588639 kubelet[2256]: E0625 14:27:44.588589 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.588751 kubelet[2256]: E0625 14:27:44.588734 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.588751 kubelet[2256]: W0625 14:27:44.588745 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.588809 kubelet[2256]: E0625 14:27:44.588756 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.588952 kubelet[2256]: E0625 14:27:44.588935 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.588952 kubelet[2256]: W0625 14:27:44.588949 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.589098 kubelet[2256]: E0625 14:27:44.588964 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.589158 kubelet[2256]: E0625 14:27:44.589136 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.589158 kubelet[2256]: W0625 14:27:44.589150 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.589213 kubelet[2256]: E0625 14:27:44.589162 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:44.589325 kubelet[2256]: E0625 14:27:44.589312 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.589325 kubelet[2256]: W0625 14:27:44.589322 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.589407 kubelet[2256]: E0625 14:27:44.589334 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.600380 kubelet[2256]: I0625 14:27:44.598540 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6d95557676-qxrnx" podStartSLOduration=1.585533002 podCreationTimestamp="2024-06-25 14:27:41 +0000 UTC" firstStartedPulling="2024-06-25 14:27:41.920765734 +0000 UTC m=+20.590929471" lastFinishedPulling="2024-06-25 14:27:43.933742162 +0000 UTC m=+22.603905899" observedRunningTime="2024-06-25 14:27:44.59828487 +0000 UTC m=+23.268448607" watchObservedRunningTime="2024-06-25 14:27:44.59850943 +0000 UTC m=+23.268673167" Jun 25 14:27:44.672936 kubelet[2256]: E0625 14:27:44.672912 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.672936 kubelet[2256]: W0625 14:27:44.672930 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.673112 kubelet[2256]: E0625 14:27:44.672953 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.673186 kubelet[2256]: E0625 14:27:44.673177 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.673218 kubelet[2256]: W0625 14:27:44.673186 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.673218 kubelet[2256]: E0625 14:27:44.673204 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.673536 kubelet[2256]: E0625 14:27:44.673505 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.673536 kubelet[2256]: W0625 14:27:44.673518 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.673536 kubelet[2256]: E0625 14:27:44.673535 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:44.673843 kubelet[2256]: E0625 14:27:44.673812 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.673843 kubelet[2256]: W0625 14:27:44.673823 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.673843 kubelet[2256]: E0625 14:27:44.673837 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.674002 kubelet[2256]: E0625 14:27:44.673990 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.674002 kubelet[2256]: W0625 14:27:44.673999 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.674071 kubelet[2256]: E0625 14:27:44.674010 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.674153 kubelet[2256]: E0625 14:27:44.674142 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.674153 kubelet[2256]: W0625 14:27:44.674150 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.674225 kubelet[2256]: E0625 14:27:44.674163 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.674341 kubelet[2256]: E0625 14:27:44.674324 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.674341 kubelet[2256]: W0625 14:27:44.674335 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.674423 kubelet[2256]: E0625 14:27:44.674357 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.674702 kubelet[2256]: E0625 14:27:44.674679 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.674784 kubelet[2256]: W0625 14:27:44.674770 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.674864 kubelet[2256]: E0625 14:27:44.674854 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:44.675025 kubelet[2256]: E0625 14:27:44.675008 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.675025 kubelet[2256]: W0625 14:27:44.675020 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.675086 kubelet[2256]: E0625 14:27:44.675036 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.675214 kubelet[2256]: E0625 14:27:44.675200 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.675214 kubelet[2256]: W0625 14:27:44.675213 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.675284 kubelet[2256]: E0625 14:27:44.675225 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.675393 kubelet[2256]: E0625 14:27:44.675379 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.675393 kubelet[2256]: W0625 14:27:44.675391 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.675459 kubelet[2256]: E0625 14:27:44.675402 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.675560 kubelet[2256]: E0625 14:27:44.675548 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.675560 kubelet[2256]: W0625 14:27:44.675559 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.675623 kubelet[2256]: E0625 14:27:44.675572 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.675932 kubelet[2256]: E0625 14:27:44.675919 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.676006 kubelet[2256]: W0625 14:27:44.675994 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.676065 kubelet[2256]: E0625 14:27:44.676056 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:44.676329 kubelet[2256]: E0625 14:27:44.676316 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.676553 kubelet[2256]: W0625 14:27:44.676535 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.676672 kubelet[2256]: E0625 14:27:44.676662 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.676932 kubelet[2256]: E0625 14:27:44.676918 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.677002 kubelet[2256]: W0625 14:27:44.676990 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.677068 kubelet[2256]: E0625 14:27:44.677059 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.677301 kubelet[2256]: E0625 14:27:44.677289 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.677388 kubelet[2256]: W0625 14:27:44.677376 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.677458 kubelet[2256]: E0625 14:27:44.677449 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.677717 kubelet[2256]: E0625 14:27:44.677699 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.677717 kubelet[2256]: W0625 14:27:44.677714 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.677816 kubelet[2256]: E0625 14:27:44.677733 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:27:44.677973 kubelet[2256]: E0625 14:27:44.677960 2256 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:27:44.677973 kubelet[2256]: W0625 14:27:44.677972 2256 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:27:44.678042 kubelet[2256]: E0625 14:27:44.677985 2256 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:27:45.045825 containerd[1244]: time="2024-06-25T14:27:45.045784115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:45.048754 containerd[1244]: time="2024-06-25T14:27:45.048707236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:27:45.049810 containerd[1244]: time="2024-06-25T14:27:45.049780196Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:45.051962 containerd[1244]: time="2024-06-25T14:27:45.051927317Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:45.052995 containerd[1244]: time="2024-06-25T14:27:45.052971037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:45.054537 containerd[1244]: time="2024-06-25T14:27:45.054495238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.119354756s" Jun 25 14:27:45.054647 containerd[1244]: time="2024-06-25T14:27:45.054627398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:27:45.064496 containerd[1244]: time="2024-06-25T14:27:45.058355679Z" level=info msg="CreateContainer within sandbox \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:27:45.103308 containerd[1244]: time="2024-06-25T14:27:45.103252171Z" level=info msg="CreateContainer within sandbox \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd\"" Jun 25 14:27:45.103931 containerd[1244]: time="2024-06-25T14:27:45.103901931Z" level=info msg="StartContainer for \"9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd\"" Jun 25 14:27:45.133609 systemd[1]: Started cri-containerd-9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd.scope - libcontainer container 9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd. 
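The repeated driver-call.go/plugins.go messages in this stretch are the kubelet's FlexVolume plugin probe failing: it tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and to parse the command's JSON reply, but the binary is not on disk yet, so the exec fails ("executable file not found in $PATH") and the resulting empty output cannot be unmarshalled ("unexpected end of JSON input"). Calico's flexvol-driver container, started just above, is presumably what installs that binary onto the host path. A rough illustration of that failure pairing, written as a Python sketch rather than the kubelet's actual Go code:

```python
import json
import subprocess

# Hypothetical stand-in for the kubelet's FlexVolume probe described in the log:
# run "<driver> init" and parse its stdout as JSON.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

try:
    out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
except FileNotFoundError:
    # Driver not installed yet -> the "executable file not found" warning, output ""
    out = ""

try:
    json.loads(out)
except json.JSONDecodeError as err:
    # Empty output -> the "unexpected end of JSON input" error seen above
    print(f"probe failed: {err}")
```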
Jun 25 14:27:45.143000 audit: BPF prog-id=123 op=LOAD Jun 25 14:27:45.143000 audit[2830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2693 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:45.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966373633383538343737613466376131613666343434626363373163 Jun 25 14:27:45.143000 audit: BPF prog-id=124 op=LOAD Jun 25 14:27:45.143000 audit[2830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2693 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:45.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966373633383538343737613466376131613666343434626363373163 Jun 25 14:27:45.143000 audit: BPF prog-id=124 op=UNLOAD Jun 25 14:27:45.143000 audit: BPF prog-id=123 op=UNLOAD Jun 25 14:27:45.143000 audit: BPF prog-id=125 op=LOAD Jun 25 14:27:45.143000 audit[2830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2693 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:45.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966373633383538343737613466376131613666343434626363373163 Jun 25 14:27:45.155289 containerd[1244]: time="2024-06-25T14:27:45.155173424Z" level=info msg="StartContainer for \"9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd\" returns successfully" Jun 25 14:27:45.170007 systemd[1]: cri-containerd-9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd.scope: Deactivated successfully. 
Jun 25 14:27:45.173000 audit: BPF prog-id=125 op=UNLOAD Jun 25 14:27:45.209214 containerd[1244]: time="2024-06-25T14:27:45.209159118Z" level=info msg="shim disconnected" id=9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd namespace=k8s.io Jun 25 14:27:45.209631 containerd[1244]: time="2024-06-25T14:27:45.209606399Z" level=warning msg="cleaning up after shim disconnected" id=9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd namespace=k8s.io Jun 25 14:27:45.209714 containerd[1244]: time="2024-06-25T14:27:45.209698759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:27:45.427650 kubelet[2256]: E0625 14:27:45.427619 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:45.510114 kubelet[2256]: I0625 14:27:45.510083 2256 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:27:45.510768 kubelet[2256]: E0625 14:27:45.510750 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:45.511394 kubelet[2256]: E0625 14:27:45.511375 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:45.512391 containerd[1244]: time="2024-06-25T14:27:45.512332638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:27:45.945751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f763858477a4f7a1a6f444bcc71c44a4f786cbc3603cb9dcc615b8ceebaecfd-rootfs.mount: Deactivated successfully. 
Jun 25 14:27:47.427474 kubelet[2256]: E0625 14:27:47.427441 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:48.968315 containerd[1244]: time="2024-06-25T14:27:48.968272257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:48.969362 containerd[1244]: time="2024-06-25T14:27:48.969322457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:27:48.970081 containerd[1244]: time="2024-06-25T14:27:48.970057657Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:48.971642 containerd[1244]: time="2024-06-25T14:27:48.971600818Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:48.974632 containerd[1244]: time="2024-06-25T14:27:48.974596738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:48.976307 containerd[1244]: time="2024-06-25T14:27:48.976272539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.463870541s" Jun 25 14:27:48.976438 containerd[1244]: time="2024-06-25T14:27:48.976417419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:27:48.979980 containerd[1244]: time="2024-06-25T14:27:48.978388939Z" level=info msg="CreateContainer within sandbox \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:27:48.990555 containerd[1244]: time="2024-06-25T14:27:48.990510422Z" level=info msg="CreateContainer within sandbox \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1\"" Jun 25 14:27:48.990958 containerd[1244]: time="2024-06-25T14:27:48.990918702Z" level=info msg="StartContainer for \"c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1\"" Jun 25 14:27:49.022578 systemd[1]: Started cri-containerd-c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1.scope - libcontainer container c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1. 
Jun 25 14:27:49.031000 audit: BPF prog-id=126 op=LOAD Jun 25 14:27:49.033853 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 14:27:49.033909 kernel: audit: type=1334 audit(1719325669.031:494): prog-id=126 op=LOAD Jun 25 14:27:49.031000 audit[2904]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2693 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:49.037260 kernel: audit: type=1300 audit(1719325669.031:494): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2693 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:49.037338 kernel: audit: type=1327 audit(1719325669.031:494): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343832383662323665636537326134346137646464386331633332 Jun 25 14:27:49.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343832383662323665636537326134346137646464386331633332 Jun 25 14:27:49.031000 audit: BPF prog-id=127 op=LOAD Jun 25 14:27:49.040416 kernel: audit: type=1334 audit(1719325669.031:495): prog-id=127 op=LOAD Jun 25 14:27:49.040469 kernel: audit: type=1300 audit(1719325669.031:495): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2693 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:49.031000 audit[2904]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2693 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:49.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343832383662323665636537326134346137646464386331633332 Jun 25 14:27:49.045734 kernel: audit: type=1327 audit(1719325669.031:495): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343832383662323665636537326134346137646464386331633332 Jun 25 14:27:49.032000 audit: BPF prog-id=127 op=UNLOAD Jun 25 14:27:49.047415 kernel: audit: type=1334 audit(1719325669.032:496): prog-id=127 op=UNLOAD Jun 25 14:27:49.047487 kernel: audit: type=1334 audit(1719325669.032:497): prog-id=126 op=UNLOAD Jun 25 14:27:49.047509 kernel: audit: type=1334 audit(1719325669.032:498): prog-id=128 op=LOAD Jun 25 14:27:49.032000 audit: BPF prog-id=126 op=UNLOAD Jun 25 14:27:49.032000 audit: BPF prog-id=128 op=LOAD Jun 25 14:27:49.032000 audit[2904]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2693 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:49.050388 kernel: audit: type=1300 audit(1719325669.032:498): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2693 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:49.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343832383662323665636537326134346137646464386331633332 Jun 25 14:27:49.063567 containerd[1244]: time="2024-06-25T14:27:49.063456517Z" level=info msg="StartContainer for \"c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1\" returns successfully" Jun 25 14:27:49.428952 kubelet[2256]: E0625 14:27:49.428923 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:49.530602 kubelet[2256]: E0625 14:27:49.530240 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:49.612991 systemd[1]: cri-containerd-c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1.scope: Deactivated successfully. Jun 25 14:27:49.616000 audit: BPF prog-id=128 op=UNLOAD Jun 25 14:27:49.647690 containerd[1244]: time="2024-06-25T14:27:49.647628276Z" level=info msg="shim disconnected" id=c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1 namespace=k8s.io Jun 25 14:27:49.647690 containerd[1244]: time="2024-06-25T14:27:49.647686276Z" level=warning msg="cleaning up after shim disconnected" id=c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1 namespace=k8s.io Jun 25 14:27:49.647886 containerd[1244]: time="2024-06-25T14:27:49.647704116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:27:49.663729 kubelet[2256]: I0625 14:27:49.663697 2256 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 14:27:49.683903 kubelet[2256]: I0625 14:27:49.681500 2256 topology_manager.go:215] "Topology Admit Handler" podUID="fbf7d824-77fc-4d35-a17d-65edab6216f5" podNamespace="kube-system" podName="coredns-5dd5756b68-758q6" Jun 25 14:27:49.692679 systemd[1]: Created slice kubepods-burstable-podfbf7d824_77fc_4d35_a17d_65edab6216f5.slice - libcontainer container kubepods-burstable-podfbf7d824_77fc_4d35_a17d_65edab6216f5.slice. 
Jun 25 14:27:49.694018 kubelet[2256]: I0625 14:27:49.693355 2256 topology_manager.go:215] "Topology Admit Handler" podUID="2dba4c0d-dc07-4d6e-a4e1-19d948f912fa" podNamespace="calico-system" podName="calico-kube-controllers-7857d6897f-22rg5" Jun 25 14:27:49.694018 kubelet[2256]: I0625 14:27:49.693613 2256 topology_manager.go:215] "Topology Admit Handler" podUID="e7c7496b-13ed-42c4-b1e1-6a2ce57313f1" podNamespace="kube-system" podName="coredns-5dd5756b68-68qx4" Jun 25 14:27:49.699112 systemd[1]: Created slice kubepods-besteffort-pod2dba4c0d_dc07_4d6e_a4e1_19d948f912fa.slice - libcontainer container kubepods-besteffort-pod2dba4c0d_dc07_4d6e_a4e1_19d948f912fa.slice. Jun 25 14:27:49.704139 systemd[1]: Created slice kubepods-burstable-pode7c7496b_13ed_42c4_b1e1_6a2ce57313f1.slice - libcontainer container kubepods-burstable-pode7c7496b_13ed_42c4_b1e1_6a2ce57313f1.slice. Jun 25 14:27:49.807531 kubelet[2256]: I0625 14:27:49.807480 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44ch\" (UniqueName: \"kubernetes.io/projected/fbf7d824-77fc-4d35-a17d-65edab6216f5-kube-api-access-x44ch\") pod \"coredns-5dd5756b68-758q6\" (UID: \"fbf7d824-77fc-4d35-a17d-65edab6216f5\") " pod="kube-system/coredns-5dd5756b68-758q6" Jun 25 14:27:49.807531 kubelet[2256]: I0625 14:27:49.807541 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7c7496b-13ed-42c4-b1e1-6a2ce57313f1-config-volume\") pod \"coredns-5dd5756b68-68qx4\" (UID: \"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1\") " pod="kube-system/coredns-5dd5756b68-68qx4" Jun 25 14:27:49.807719 kubelet[2256]: I0625 14:27:49.807568 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dba4c0d-dc07-4d6e-a4e1-19d948f912fa-tigera-ca-bundle\") pod \"calico-kube-controllers-7857d6897f-22rg5\" (UID: \"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa\") " pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" Jun 25 14:27:49.807719 kubelet[2256]: I0625 14:27:49.807589 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbf7d824-77fc-4d35-a17d-65edab6216f5-config-volume\") pod \"coredns-5dd5756b68-758q6\" (UID: \"fbf7d824-77fc-4d35-a17d-65edab6216f5\") " pod="kube-system/coredns-5dd5756b68-758q6" Jun 25 14:27:49.807719 kubelet[2256]: I0625 14:27:49.807626 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tgfl\" (UniqueName: \"kubernetes.io/projected/2dba4c0d-dc07-4d6e-a4e1-19d948f912fa-kube-api-access-6tgfl\") pod \"calico-kube-controllers-7857d6897f-22rg5\" (UID: \"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa\") " pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" Jun 25 14:27:49.807719 kubelet[2256]: I0625 14:27:49.807651 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqst8\" (UniqueName: \"kubernetes.io/projected/e7c7496b-13ed-42c4-b1e1-6a2ce57313f1-kube-api-access-tqst8\") pod \"coredns-5dd5756b68-68qx4\" (UID: \"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1\") " pod="kube-system/coredns-5dd5756b68-68qx4" Jun 25 14:27:49.989542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c948286b26ece72a44a7ddd8c1c32a459516c349787efeb734c5d6bc65e74fb1-rootfs.mount: 
Deactivated successfully. Jun 25 14:27:49.997194 kubelet[2256]: E0625 14:27:49.997162 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:49.997923 containerd[1244]: time="2024-06-25T14:27:49.997871267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-758q6,Uid:fbf7d824-77fc-4d35-a17d-65edab6216f5,Namespace:kube-system,Attempt:0,}" Jun 25 14:27:50.003968 containerd[1244]: time="2024-06-25T14:27:50.003910188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d6897f-22rg5,Uid:2dba4c0d-dc07-4d6e-a4e1-19d948f912fa,Namespace:calico-system,Attempt:0,}" Jun 25 14:27:50.011422 kubelet[2256]: E0625 14:27:50.011366 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:50.012131 containerd[1244]: time="2024-06-25T14:27:50.012094670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-68qx4,Uid:e7c7496b-13ed-42c4-b1e1-6a2ce57313f1,Namespace:kube-system,Attempt:0,}" Jun 25 14:27:50.333796 containerd[1244]: time="2024-06-25T14:27:50.333379131Z" level=error msg="Failed to destroy network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.333796 containerd[1244]: time="2024-06-25T14:27:50.333723891Z" level=error msg="encountered an error cleaning up failed sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.333796 containerd[1244]: time="2024-06-25T14:27:50.333773371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d6897f-22rg5,Uid:2dba4c0d-dc07-4d6e-a4e1-19d948f912fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.336675 kubelet[2256]: E0625 14:27:50.336633 2256 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.336760 kubelet[2256]: E0625 14:27:50.336721 2256 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" Jun 25 14:27:50.336760 kubelet[2256]: E0625 14:27:50.336743 2256 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" Jun 25 14:27:50.336820 kubelet[2256]: E0625 14:27:50.336801 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7857d6897f-22rg5_calico-system(2dba4c0d-dc07-4d6e-a4e1-19d948f912fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7857d6897f-22rg5_calico-system(2dba4c0d-dc07-4d6e-a4e1-19d948f912fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" podUID="2dba4c0d-dc07-4d6e-a4e1-19d948f912fa" Jun 25 14:27:50.338619 containerd[1244]: time="2024-06-25T14:27:50.338561252Z" level=error msg="Failed to destroy network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.338965 containerd[1244]: time="2024-06-25T14:27:50.338890532Z" level=error msg="encountered an error cleaning up failed sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.339022 containerd[1244]: time="2024-06-25T14:27:50.338985132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-758q6,Uid:fbf7d824-77fc-4d35-a17d-65edab6216f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.339326 kubelet[2256]: E0625 14:27:50.339296 2256 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.339479 kubelet[2256]: E0625 14:27:50.339438 2256 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-758q6" Jun 25 14:27:50.339528 kubelet[2256]: E0625 14:27:50.339494 2256 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-758q6" Jun 25 14:27:50.339867 kubelet[2256]: E0625 14:27:50.339543 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-758q6_kube-system(fbf7d824-77fc-4d35-a17d-65edab6216f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-758q6_kube-system(fbf7d824-77fc-4d35-a17d-65edab6216f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-758q6" podUID="fbf7d824-77fc-4d35-a17d-65edab6216f5" Jun 25 14:27:50.347859 containerd[1244]: time="2024-06-25T14:27:50.347812374Z" level=error msg="Failed to destroy network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.348266 containerd[1244]: time="2024-06-25T14:27:50.348221094Z" level=error msg="encountered an error cleaning up failed sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.348417 containerd[1244]: time="2024-06-25T14:27:50.348385654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-68qx4,Uid:e7c7496b-13ed-42c4-b1e1-6a2ce57313f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.348965 kubelet[2256]: E0625 14:27:50.348668 2256 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.348965 kubelet[2256]: E0625 14:27:50.348713 2256 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-68qx4" Jun 25 14:27:50.348965 kubelet[2256]: E0625 14:27:50.348731 2256 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-68qx4" Jun 25 14:27:50.349091 kubelet[2256]: E0625 14:27:50.348783 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-68qx4_kube-system(e7c7496b-13ed-42c4-b1e1-6a2ce57313f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-68qx4_kube-system(e7c7496b-13ed-42c4-b1e1-6a2ce57313f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-68qx4" podUID="e7c7496b-13ed-42c4-b1e1-6a2ce57313f1" Jun 25 14:27:50.532756 kubelet[2256]: I0625 14:27:50.532708 2256 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:27:50.536310 kubelet[2256]: I0625 14:27:50.536274 2256 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:27:50.537384 kubelet[2256]: I0625 14:27:50.537159 2256 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:27:50.541532 kubelet[2256]: E0625 14:27:50.541362 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:50.542848 containerd[1244]: time="2024-06-25T14:27:50.542809131Z" level=info msg="StopPodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\"" Jun 25 14:27:50.543127 containerd[1244]: time="2024-06-25T14:27:50.543100571Z" level=info msg="Ensure that sandbox fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64 in task-service has been cleanup successfully" Jun 25 14:27:50.544955 containerd[1244]: time="2024-06-25T14:27:50.544097452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:27:50.545719 containerd[1244]: time="2024-06-25T14:27:50.545682852Z" level=info msg="StopPodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\"" Jun 25 14:27:50.547099 containerd[1244]: time="2024-06-25T14:27:50.547071932Z" level=info msg="Ensure that sandbox f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828 in task-service has been cleanup successfully" Jun 25 14:27:50.548087 containerd[1244]: time="2024-06-25T14:27:50.547788772Z" level=info msg="StopPodSandbox for 
\"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\"" Jun 25 14:27:50.548260 containerd[1244]: time="2024-06-25T14:27:50.548234172Z" level=info msg="Ensure that sandbox e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df in task-service has been cleanup successfully" Jun 25 14:27:50.574473 containerd[1244]: time="2024-06-25T14:27:50.574399337Z" level=error msg="StopPodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" failed" error="failed to destroy network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.578240 kubelet[2256]: E0625 14:27:50.578199 2256 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:27:50.578319 kubelet[2256]: E0625 14:27:50.578304 2256 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828"} Jun 25 14:27:50.578376 kubelet[2256]: E0625 14:27:50.578352 2256 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbf7d824-77fc-4d35-a17d-65edab6216f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:27:50.578439 kubelet[2256]: E0625 14:27:50.578395 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbf7d824-77fc-4d35-a17d-65edab6216f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-758q6" podUID="fbf7d824-77fc-4d35-a17d-65edab6216f5" Jun 25 14:27:50.578597 containerd[1244]: time="2024-06-25T14:27:50.578426498Z" level=error msg="StopPodSandbox for \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" failed" error="failed to destroy network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.578653 containerd[1244]: time="2024-06-25T14:27:50.578519138Z" level=error msg="StopPodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" failed" error="failed to destroy network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:50.578841 kubelet[2256]: E0625 14:27:50.578819 2256 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:27:50.578879 kubelet[2256]: E0625 14:27:50.578858 2256 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df"} Jun 25 14:27:50.578902 kubelet[2256]: E0625 14:27:50.578890 2256 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:27:50.578939 kubelet[2256]: E0625 14:27:50.578913 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-68qx4" podUID="e7c7496b-13ed-42c4-b1e1-6a2ce57313f1" Jun 25 14:27:50.578981 kubelet[2256]: E0625 14:27:50.578948 2256 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:27:50.578981 kubelet[2256]: E0625 14:27:50.578961 2256 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64"} Jun 25 14:27:50.579030 kubelet[2256]: E0625 14:27:50.578986 2256 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:27:50.579030 kubelet[2256]: E0625 14:27:50.579016 2256 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" podUID="2dba4c0d-dc07-4d6e-a4e1-19d948f912fa" Jun 25 14:27:50.987870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64-shm.mount: Deactivated successfully. Jun 25 14:27:50.987947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828-shm.mount: Deactivated successfully. Jun 25 14:27:51.432542 systemd[1]: Created slice kubepods-besteffort-podeca06206_c460_41f2_8686_c513e245df74.slice - libcontainer container kubepods-besteffort-podeca06206_c460_41f2_8686_c513e245df74.slice. Jun 25 14:27:51.434634 containerd[1244]: time="2024-06-25T14:27:51.434576096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm8qv,Uid:eca06206-c460-41f2-8686-c513e245df74,Namespace:calico-system,Attempt:0,}" Jun 25 14:27:51.490263 containerd[1244]: time="2024-06-25T14:27:51.490177026Z" level=error msg="Failed to destroy network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:51.490597 containerd[1244]: time="2024-06-25T14:27:51.490548107Z" level=error msg="encountered an error cleaning up failed sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:51.490640 containerd[1244]: time="2024-06-25T14:27:51.490608387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm8qv,Uid:eca06206-c460-41f2-8686-c513e245df74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:51.491244 kubelet[2256]: E0625 14:27:51.490860 2256 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:51.491244 kubelet[2256]: E0625 14:27:51.490912 2256 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:51.491244 kubelet[2256]: E0625 14:27:51.490934 2256 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm8qv" Jun 25 14:27:51.491411 kubelet[2256]: E0625 14:27:51.490984 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jm8qv_calico-system(eca06206-c460-41f2-8686-c513e245df74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jm8qv_calico-system(eca06206-c460-41f2-8686-c513e245df74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:51.492080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e-shm.mount: Deactivated successfully. Jun 25 14:27:51.546956 kubelet[2256]: I0625 14:27:51.546428 2256 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:27:51.548640 containerd[1244]: time="2024-06-25T14:27:51.547181277Z" level=info msg="StopPodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\"" Jun 25 14:27:51.548640 containerd[1244]: time="2024-06-25T14:27:51.547465237Z" level=info msg="Ensure that sandbox c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e in task-service has been cleanup successfully" Jun 25 14:27:51.573679 containerd[1244]: time="2024-06-25T14:27:51.573595441Z" level=error msg="StopPodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" failed" error="failed to destroy network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:27:51.573908 kubelet[2256]: E0625 14:27:51.573869 2256 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:27:51.573975 kubelet[2256]: E0625 14:27:51.573947 2256 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e"} Jun 25 
14:27:51.574006 kubelet[2256]: E0625 14:27:51.573986 2256 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eca06206-c460-41f2-8686-c513e245df74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:27:51.574064 kubelet[2256]: E0625 14:27:51.574015 2256 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eca06206-c460-41f2-8686-c513e245df74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm8qv" podUID="eca06206-c460-41f2-8686-c513e245df74" Jun 25 14:27:51.877156 kubelet[2256]: I0625 14:27:51.876379 2256 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:27:51.877156 kubelet[2256]: E0625 14:27:51.876980 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:51.904000 audit[3214]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:51.904000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffc96c9880 a2=0 a3=1 items=0 ppid=2430 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:51.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:51.906000 audit[3214]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:27:51.906000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc96c9880 a2=0 a3=1 items=0 ppid=2430 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:51.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:27:52.548855 kubelet[2256]: E0625 14:27:52.548827 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:53.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:41114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:27:53.204255 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:41114.service - OpenSSH per-connection server daemon (10.0.0.1:41114). Jun 25 14:27:53.240000 audit[3220]: USER_ACCT pid=3220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:53.242531 sshd[3220]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:27:53.242000 audit[3220]: CRED_ACQ pid=3220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:53.242000 audit[3220]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd690cfd0 a2=3 a3=1 items=0 ppid=1 pid=3220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:53.242000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:27:53.243821 sshd[3220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:27:53.248322 systemd-logind[1231]: New session 8 of user core. Jun 25 14:27:53.258527 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 14:27:53.262000 audit[3220]: USER_START pid=3220 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:53.263000 audit[3222]: CRED_ACQ pid=3222 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:53.404551 sshd[3220]: pam_unix(sshd:session): session closed for user core Jun 25 14:27:53.404000 audit[3220]: USER_END pid=3220 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:53.404000 audit[3220]: CRED_DISP pid=3220 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:53.407172 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:41114.service: Deactivated successfully. Jun 25 14:27:53.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:41114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:53.407936 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:27:53.408557 systemd-logind[1231]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:27:53.409568 systemd-logind[1231]: Removed session 8. Jun 25 14:27:54.316014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918984159.mount: Deactivated successfully. 
Jun 25 14:27:54.511588 containerd[1244]: time="2024-06-25T14:27:54.511542839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:54.513011 containerd[1244]: time="2024-06-25T14:27:54.512957999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:27:54.516286 containerd[1244]: time="2024-06-25T14:27:54.516064519Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:54.517617 containerd[1244]: time="2024-06-25T14:27:54.517588280Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:54.520762 containerd[1244]: time="2024-06-25T14:27:54.520720480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:27:54.521441 containerd[1244]: time="2024-06-25T14:27:54.521402960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.977265588s" Jun 25 14:27:54.521441 containerd[1244]: time="2024-06-25T14:27:54.521436880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:27:54.533718 containerd[1244]: time="2024-06-25T14:27:54.533688402Z" level=info msg="CreateContainer within sandbox \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:27:54.550098 containerd[1244]: time="2024-06-25T14:27:54.550049964Z" level=info msg="CreateContainer within sandbox \"ae9d1983d7273ee14c832b7d4952de7e55f16381a5c89d89a968004e9d45df06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e7f7bd0c3abcb1c6e55dc99f9c7794b53864efa94aeb7947cd197883e078f3e8\"" Jun 25 14:27:54.551618 containerd[1244]: time="2024-06-25T14:27:54.551585685Z" level=info msg="StartContainer for \"e7f7bd0c3abcb1c6e55dc99f9c7794b53864efa94aeb7947cd197883e078f3e8\"" Jun 25 14:27:54.605547 systemd[1]: Started cri-containerd-e7f7bd0c3abcb1c6e55dc99f9c7794b53864efa94aeb7947cd197883e078f3e8.scope - libcontainer container e7f7bd0c3abcb1c6e55dc99f9c7794b53864efa94aeb7947cd197883e078f3e8. 
Jun 25 14:27:54.619000 audit: BPF prog-id=129 op=LOAD Jun 25 14:27:54.621643 kernel: kauditd_printk_skb: 19 callbacks suppressed Jun 25 14:27:54.621832 kernel: audit: type=1334 audit(1719325674.619:511): prog-id=129 op=LOAD Jun 25 14:27:54.619000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2693 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:54.625397 kernel: audit: type=1300 audit(1719325674.619:511): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2693 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:54.625466 kernel: audit: type=1327 audit(1719325674.619:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663762643063336162636231633665353564633939663963373739 Jun 25 14:27:54.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663762643063336162636231633665353564633939663963373739 Jun 25 14:27:54.623000 audit: BPF prog-id=130 op=LOAD Jun 25 14:27:54.629030 kernel: audit: type=1334 audit(1719325674.623:512): prog-id=130 op=LOAD Jun 25 14:27:54.629079 kernel: audit: type=1300 audit(1719325674.623:512): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2693 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:54.623000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2693 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:54.632367 kernel: audit: type=1327 audit(1719325674.623:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663762643063336162636231633665353564633939663963373739 Jun 25 14:27:54.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663762643063336162636231633665353564633939663963373739 Jun 25 14:27:54.624000 audit: BPF prog-id=130 op=UNLOAD Jun 25 14:27:54.624000 audit: BPF prog-id=129 op=UNLOAD Jun 25 14:27:54.636722 kernel: audit: type=1334 audit(1719325674.624:513): prog-id=130 op=UNLOAD Jun 25 14:27:54.636789 kernel: audit: type=1334 audit(1719325674.624:514): prog-id=129 op=UNLOAD Jun 25 14:27:54.624000 audit: BPF prog-id=131 op=LOAD Jun 25 14:27:54.637554 kernel: audit: type=1334 audit(1719325674.624:515): prog-id=131 op=LOAD Jun 25 14:27:54.637580 kernel: audit: type=1300 audit(1719325674.624:515): 
arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2693 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:54.624000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2693 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:54.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663762643063336162636231633665353564633939663963373739 Jun 25 14:27:54.684965 containerd[1244]: time="2024-06-25T14:27:54.684882784Z" level=info msg="StartContainer for \"e7f7bd0c3abcb1c6e55dc99f9c7794b53864efa94aeb7947cd197883e078f3e8\" returns successfully" Jun 25 14:27:54.797442 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:27:54.797605 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 14:27:55.559837 kubelet[2256]: E0625 14:27:55.559808 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:55.572286 kubelet[2256]: I0625 14:27:55.572248 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-p8lzk" podStartSLOduration=1.98779129 podCreationTimestamp="2024-06-25 14:27:41 +0000 UTC" firstStartedPulling="2024-06-25 14:27:41.93730254 +0000 UTC m=+20.607466277" lastFinishedPulling="2024-06-25 14:27:54.5217052 +0000 UTC m=+33.191868937" observedRunningTime="2024-06-25 14:27:55.57212647 +0000 UTC m=+34.242290207" watchObservedRunningTime="2024-06-25 14:27:55.57219395 +0000 UTC m=+34.242357687" Jun 25 14:27:56.062000 audit[3363]: AVC avc: denied { write } for pid=3363 comm="tee" name="fd" dev="proc" ino=19096 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.062000 audit[3363]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffeba55a1f a2=241 a3=1b6 items=1 ppid=3317 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.062000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:27:56.062000 audit: PATH item=0 name="/dev/fd/63" inode=19567 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.079000 audit[3369]: AVC avc: denied { write } for pid=3369 comm="tee" name="fd" dev="proc" ino=18423 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.079000 audit[3369]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe4ecaa2e a2=241 a3=1b6 
items=1 ppid=3312 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.079000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:27:56.079000 audit: PATH item=0 name="/dev/fd/63" inode=19570 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.082000 audit[3373]: AVC avc: denied { write } for pid=3373 comm="tee" name="fd" dev="proc" ino=19577 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.082000 audit[3373]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd777fa2e a2=241 a3=1b6 items=1 ppid=3323 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.082000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:27:56.082000 audit: PATH item=0 name="/dev/fd/63" inode=18418 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.082000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.087000 audit[3386]: AVC avc: denied { write } for pid=3386 comm="tee" name="fd" dev="proc" ino=19112 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.087000 audit[3386]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcfd53a1e a2=241 a3=1b6 items=1 ppid=3321 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.087000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:27:56.087000 audit: PATH item=0 name="/dev/fd/63" inode=18425 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.087000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.088000 audit[3378]: AVC avc: denied { write } for pid=3378 comm="tee" name="fd" dev="proc" ino=19116 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.088000 audit[3378]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff71f7a2f a2=241 a3=1b6 items=1 ppid=3314 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.088000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:27:56.088000 audit: PATH item=0 name="/dev/fd/63" inode=19106 dev=00:0c mode=010600 ouid=0 ogid=0 
rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.096000 audit[3381]: AVC avc: denied { write } for pid=3381 comm="tee" name="fd" dev="proc" ino=19582 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.096000 audit[3381]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffef63ba30 a2=241 a3=1b6 items=1 ppid=3313 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.096000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:27:56.096000 audit: PATH item=0 name="/dev/fd/63" inode=19107 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.126000 audit[3377]: AVC avc: denied { write } for pid=3377 comm="tee" name="fd" dev="proc" ino=19125 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:27:56.126000 audit[3377]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffd28ca2e a2=241 a3=1b6 items=1 ppid=3311 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.126000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:27:56.126000 audit: PATH item=0 name="/dev/fd/63" inode=20574 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:27:56.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:27:56.332426 systemd-networkd[1082]: vxlan.calico: Link UP Jun 25 14:27:56.332434 systemd-networkd[1082]: vxlan.calico: Gained carrier Jun 25 14:27:56.356000 audit: BPF prog-id=132 op=LOAD Jun 25 14:27:56.356000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffff5f80f8 a2=70 a3=ffffff5f8168 items=0 ppid=3322 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.356000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:27:56.356000 audit: BPF prog-id=132 op=UNLOAD Jun 25 14:27:56.356000 audit: BPF prog-id=133 op=LOAD Jun 25 14:27:56.356000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffff5f80f8 a2=70 a3=4b243c items=0 ppid=3322 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.356000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:27:56.357000 audit: BPF prog-id=133 op=UNLOAD Jun 25 14:27:56.357000 audit: BPF prog-id=134 op=LOAD Jun 25 14:27:56.357000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffff5f8098 a2=70 a3=ffffff5f8108 items=0 ppid=3322 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.357000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:27:56.357000 audit: BPF prog-id=134 op=UNLOAD Jun 25 14:27:56.357000 audit: BPF prog-id=135 op=LOAD Jun 25 14:27:56.357000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffff5f80c8 a2=70 a3=aab84a9 items=0 ppid=3322 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.357000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:27:56.376000 audit: BPF prog-id=135 op=UNLOAD Jun 25 14:27:56.414000 audit[3494]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=3494 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:27:56.414000 audit[3494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc211ca70 a2=0 a3=ffff98de4fa8 items=0 ppid=3322 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.414000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:27:56.417000 audit[3492]: NETFILTER_CFG table=raw:100 family=2 entries=19 op=nft_register_chain pid=3492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:27:56.417000 audit[3492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=ffffd1ee8770 a2=0 a3=ffffbc82dfa8 items=0 ppid=3322 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.417000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:27:56.419000 audit[3493]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:27:56.419000 audit[3493]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=5084 a0=3 a1=ffffeeffe070 a2=0 a3=ffffb6b2cfa8 items=0 ppid=3322 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.419000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:27:56.423000 audit[3497]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=3497 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:27:56.423000 audit[3497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffd8a702d0 a2=0 a3=ffff8f42cfa8 items=0 ppid=3322 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:56.423000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:27:56.560580 kubelet[2256]: I0625 14:27:56.560551 2256 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:27:56.561315 kubelet[2256]: E0625 14:27:56.561299 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:27:58.130504 systemd-networkd[1082]: vxlan.calico: Gained IPv6LL Jun 25 14:27:58.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:41126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:58.418016 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:41126.service - OpenSSH per-connection server daemon (10.0.0.1:41126). Jun 25 14:27:58.452000 audit[3505]: USER_ACCT pid=3505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:58.453734 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 41126 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:27:58.453000 audit[3505]: CRED_ACQ pid=3505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:58.453000 audit[3505]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffae7710 a2=3 a3=1 items=0 ppid=1 pid=3505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:27:58.453000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:27:58.455387 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:27:58.459040 systemd-logind[1231]: New session 9 of user core. Jun 25 14:27:58.472508 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 14:27:58.474000 audit[3505]: USER_START pid=3505 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:58.476000 audit[3507]: CRED_ACQ pid=3507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:58.637049 sshd[3505]: pam_unix(sshd:session): session closed for user core Jun 25 14:27:58.636000 audit[3505]: USER_END pid=3505 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:58.636000 audit[3505]: CRED_DISP pid=3505 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:27:58.639402 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:27:58.639996 systemd-logind[1231]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:27:58.640113 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:41126.service: Deactivated successfully. Jun 25 14:27:58.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:41126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:27:58.641111 systemd-logind[1231]: Removed session 9. Jun 25 14:28:02.428087 containerd[1244]: time="2024-06-25T14:28:02.428035353Z" level=info msg="StopPodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\"" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.511 [INFO][3542] k8s.go 608: Cleaning up netns ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.513 [INFO][3542] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" iface="eth0" netns="/var/run/netns/cni-430e9e36-8115-bc7c-1298-5d23fd3ae232" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.516 [INFO][3542] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" iface="eth0" netns="/var/run/netns/cni-430e9e36-8115-bc7c-1298-5d23fd3ae232" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.516 [INFO][3542] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" iface="eth0" netns="/var/run/netns/cni-430e9e36-8115-bc7c-1298-5d23fd3ae232" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.517 [INFO][3542] k8s.go 615: Releasing IP address(es) ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.517 [INFO][3542] utils.go 188: Calico CNI releasing IP address ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.608 [INFO][3550] ipam_plugin.go 411: Releasing address using handleID ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.608 [INFO][3550] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.608 [INFO][3550] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.618 [WARNING][3550] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.618 [INFO][3550] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.619 [INFO][3550] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:02.623027 containerd[1244]: 2024-06-25 14:28:02.621 [INFO][3542] k8s.go 621: Teardown processing complete. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:02.624513 containerd[1244]: time="2024-06-25T14:28:02.624470571Z" level=info msg="TearDown network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" successfully" Jun 25 14:28:02.624628 containerd[1244]: time="2024-06-25T14:28:02.624609611Z" level=info msg="StopPodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" returns successfully" Jun 25 14:28:02.625427 kubelet[2256]: E0625 14:28:02.625378 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:02.626128 containerd[1244]: time="2024-06-25T14:28:02.626085051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-758q6,Uid:fbf7d824-77fc-4d35-a17d-65edab6216f5,Namespace:kube-system,Attempt:1,}" Jun 25 14:28:02.626300 systemd[1]: run-netns-cni\x2d430e9e36\x2d8115\x2dbc7c\x2d1298\x2d5d23fd3ae232.mount: Deactivated successfully. 
Jun 25 14:28:02.760567 systemd-networkd[1082]: calia3fcef45ffd: Link UP Jun 25 14:28:02.762687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:28:02.762790 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia3fcef45ffd: link becomes ready Jun 25 14:28:02.763213 systemd-networkd[1082]: calia3fcef45ffd: Gained carrier Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.677 [INFO][3558] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--758q6-eth0 coredns-5dd5756b68- kube-system fbf7d824-77fc-4d35-a17d-65edab6216f5 763 0 2024-06-25 14:27:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-758q6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3fcef45ffd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.677 [INFO][3558] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.706 [INFO][3572] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" HandleID="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.718 [INFO][3572] ipam_plugin.go 264: Auto assigning IP ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" HandleID="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000344130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-758q6", "timestamp":"2024-06-25 14:28:02.706737658 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.718 [INFO][3572] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.718 [INFO][3572] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.718 [INFO][3572] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.720 [INFO][3572] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.728 [INFO][3572] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.735 [INFO][3572] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.740 [INFO][3572] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.743 [INFO][3572] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.743 [INFO][3572] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.747 [INFO][3572] ipam.go 1685: Creating new handle: k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.751 [INFO][3572] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.755 [INFO][3572] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.756 [INFO][3572] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" host="localhost" Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.756 [INFO][3572] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:28:02.782265 containerd[1244]: 2024-06-25 14:28:02.756 [INFO][3572] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" HandleID="k8s-pod-network.ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.782927 containerd[1244]: 2024-06-25 14:28:02.758 [INFO][3558] k8s.go 386: Populated endpoint ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--758q6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"fbf7d824-77fc-4d35-a17d-65edab6216f5", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-758q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3fcef45ffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:02.782927 containerd[1244]: 2024-06-25 14:28:02.758 [INFO][3558] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.782927 containerd[1244]: 2024-06-25 14:28:02.758 [INFO][3558] dataplane_linux.go 68: Setting the host side veth name to calia3fcef45ffd ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.782927 containerd[1244]: 2024-06-25 14:28:02.765 [INFO][3558] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.782927 containerd[1244]: 2024-06-25 14:28:02.765 [INFO][3558] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--758q6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"fbf7d824-77fc-4d35-a17d-65edab6216f5", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e", Pod:"coredns-5dd5756b68-758q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3fcef45ffd", MAC:"82:f9:49:7a:3a:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:02.782927 containerd[1244]: 2024-06-25 14:28:02.775 [INFO][3558] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e" Namespace="kube-system" Pod="coredns-5dd5756b68-758q6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:02.795000 audit[3596]: NETFILTER_CFG table=filter:103 family=2 entries=34 op=nft_register_chain pid=3596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:28:02.798651 kernel: kauditd_printk_skb: 75 callbacks suppressed Jun 25 14:28:02.798704 kernel: audit: type=1325 audit(1719325682.795:544): table=filter:103 family=2 entries=34 op=nft_register_chain pid=3596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:28:02.795000 audit[3596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffffd37d160 a2=0 a3=ffff87188fa8 items=0 ppid=3322 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.802742 kernel: audit: type=1300 audit(1719325682.795:544): arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffffd37d160 a2=0 a3=ffff87188fa8 items=0 ppid=3322 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.795000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:28:02.804647 kernel: audit: type=1327 audit(1719325682.795:544): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:28:02.818390 containerd[1244]: time="2024-06-25T14:28:02.818263068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:28:02.818390 containerd[1244]: time="2024-06-25T14:28:02.818333308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:02.818390 containerd[1244]: time="2024-06-25T14:28:02.818367148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:28:02.818390 containerd[1244]: time="2024-06-25T14:28:02.818379588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:02.839538 systemd[1]: Started cri-containerd-ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e.scope - libcontainer container ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e. Jun 25 14:28:02.846000 audit: BPF prog-id=136 op=LOAD Jun 25 14:28:02.847000 audit: BPF prog-id=137 op=LOAD Jun 25 14:28:02.849736 kernel: audit: type=1334 audit(1719325682.846:545): prog-id=136 op=LOAD Jun 25 14:28:02.849788 kernel: audit: type=1334 audit(1719325682.847:546): prog-id=137 op=LOAD Jun 25 14:28:02.849808 kernel: audit: type=1300 audit(1719325682.847:546): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3605 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.847000 audit[3615]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3605 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.852193 kernel: audit: type=1327 audit(1719325682.847:546): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565306163623130343338646335643133383065333365346239653932 Jun 25 14:28:02.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565306163623130343338646335643133383065333365346239653932 Jun 25 14:28:02.857223 kernel: audit: type=1334 audit(1719325682.847:547): prog-id=138 op=LOAD Jun 25 14:28:02.857289 kernel: audit: type=1300 audit(1719325682.847:547): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3605 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.847000 audit: BPF prog-id=138 op=LOAD Jun 25 14:28:02.847000 audit[3615]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3605 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.859370 kernel: audit: type=1327 audit(1719325682.847:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565306163623130343338646335643133383065333365346239653932 Jun 25 14:28:02.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565306163623130343338646335643133383065333365346239653932 Jun 25 14:28:02.860561 systemd-resolved[1185]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:28:02.848000 audit: BPF prog-id=138 op=UNLOAD Jun 25 14:28:02.848000 audit: BPF prog-id=137 op=UNLOAD Jun 25 14:28:02.848000 audit: BPF prog-id=139 op=LOAD Jun 25 14:28:02.848000 audit[3615]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3605 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565306163623130343338646335643133383065333365346239653932 Jun 25 14:28:02.877088 containerd[1244]: time="2024-06-25T14:28:02.877035593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-758q6,Uid:fbf7d824-77fc-4d35-a17d-65edab6216f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e\"" Jun 25 14:28:02.877993 kubelet[2256]: E0625 14:28:02.877807 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:02.880764 containerd[1244]: time="2024-06-25T14:28:02.880716553Z" level=info msg="CreateContainer within sandbox \"ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:28:02.891748 containerd[1244]: time="2024-06-25T14:28:02.891696194Z" level=info msg="CreateContainer within sandbox \"ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a90277ed7439416b5a737dab6fce5d329db2be6978db1bfb16fa9d66d5f68489\"" Jun 25 14:28:02.892697 containerd[1244]: time="2024-06-25T14:28:02.892624234Z" level=info msg="StartContainer for \"a90277ed7439416b5a737dab6fce5d329db2be6978db1bfb16fa9d66d5f68489\"" Jun 25 14:28:02.916528 systemd[1]: Started cri-containerd-a90277ed7439416b5a737dab6fce5d329db2be6978db1bfb16fa9d66d5f68489.scope - libcontainer container 
a90277ed7439416b5a737dab6fce5d329db2be6978db1bfb16fa9d66d5f68489. Jun 25 14:28:02.924000 audit: BPF prog-id=140 op=LOAD Jun 25 14:28:02.925000 audit: BPF prog-id=141 op=LOAD Jun 25 14:28:02.925000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3605 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303237376564373433393431366235613733376461623666636535 Jun 25 14:28:02.925000 audit: BPF prog-id=142 op=LOAD Jun 25 14:28:02.925000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3605 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303237376564373433393431366235613733376461623666636535 Jun 25 14:28:02.925000 audit: BPF prog-id=142 op=UNLOAD Jun 25 14:28:02.925000 audit: BPF prog-id=141 op=UNLOAD Jun 25 14:28:02.925000 audit: BPF prog-id=143 op=LOAD Jun 25 14:28:02.925000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3605 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:02.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139303237376564373433393431366235613733376461623666636535 Jun 25 14:28:02.974038 containerd[1244]: time="2024-06-25T14:28:02.973979321Z" level=info msg="StartContainer for \"a90277ed7439416b5a737dab6fce5d329db2be6978db1bfb16fa9d66d5f68489\" returns successfully" Jun 25 14:28:03.581823 kubelet[2256]: E0625 14:28:03.581007 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:03.592064 kubelet[2256]: I0625 14:28:03.592030 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-758q6" podStartSLOduration=28.591993692 podCreationTimestamp="2024-06-25 14:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:28:03.590420652 +0000 UTC m=+42.260584389" watchObservedRunningTime="2024-06-25 14:28:03.591993692 +0000 UTC m=+42.262157429" Jun 25 14:28:03.606000 audit[3678]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:03.606000 audit[3678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffd09e5ca0 a2=0 a3=1 
items=0 ppid=2430 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:03.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:03.607000 audit[3678]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:03.607000 audit[3678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd09e5ca0 a2=0 a3=1 items=0 ppid=2430 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:03.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:03.625000 audit[3680]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=3680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:03.625000 audit[3680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd7168be0 a2=0 a3=1 items=0 ppid=2430 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:03.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:03.626000 audit[3680]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=3680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:03.626000 audit[3680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd7168be0 a2=0 a3=1 items=0 ppid=2430 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:03.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:03.649002 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Jun 25 14:28:03.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:41450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:28:03.680000 audit[3683]: USER_ACCT pid=3683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.682336 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:03.681000 audit[3683]: CRED_ACQ pid=3683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.681000 audit[3683]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde0ba670 a2=3 a3=1 items=0 ppid=1 pid=3683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:03.681000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:03.683729 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:03.687151 systemd-logind[1231]: New session 10 of user core. Jun 25 14:28:03.696584 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 14:28:03.698000 audit[3683]: USER_START pid=3683 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.699000 audit[3685]: CRED_ACQ pid=3685 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.827480 systemd-networkd[1082]: calia3fcef45ffd: Gained IPv6LL Jun 25 14:28:03.884729 sshd[3683]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:03.884000 audit[3683]: USER_END pid=3683 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.885000 audit[3683]: CRED_DISP pid=3683 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.892709 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:41450.service: Deactivated successfully. Jun 25 14:28:03.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:41450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:03.893390 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:28:03.894112 systemd-logind[1231]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:28:03.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:41462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:28:03.895376 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:41462.service - OpenSSH per-connection server daemon (10.0.0.1:41462). Jun 25 14:28:03.897997 systemd-logind[1231]: Removed session 10. Jun 25 14:28:03.926000 audit[3697]: USER_ACCT pid=3697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.927644 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 41462 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:03.927000 audit[3697]: CRED_ACQ pid=3697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.927000 audit[3697]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc7ed4e0 a2=3 a3=1 items=0 ppid=1 pid=3697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:03.927000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:03.928761 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:03.932243 systemd-logind[1231]: New session 11 of user core. Jun 25 14:28:03.940521 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 14:28:03.943000 audit[3697]: USER_START pid=3697 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:03.944000 audit[3699]: CRED_ACQ pid=3699 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.260307 sshd[3697]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:04.261000 audit[3697]: USER_END pid=3697 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.261000 audit[3697]: CRED_DISP pid=3697 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.270138 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:41468.service - OpenSSH per-connection server daemon (10.0.0.1:41468). Jun 25 14:28:04.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:41468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:04.270844 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:41462.service: Deactivated successfully. 
Jun 25 14:28:04.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:41462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:04.271842 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:28:04.273752 systemd-logind[1231]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:28:04.275438 systemd-logind[1231]: Removed session 11. Jun 25 14:28:04.307000 audit[3708]: USER_ACCT pid=3708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.308986 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 41468 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:04.308000 audit[3708]: CRED_ACQ pid=3708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.309000 audit[3708]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9572220 a2=3 a3=1 items=0 ppid=1 pid=3708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:04.309000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:04.310922 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:04.315858 systemd-logind[1231]: New session 12 of user core. Jun 25 14:28:04.328227 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 14:28:04.332000 audit[3708]: USER_START pid=3708 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.334000 audit[3711]: CRED_ACQ pid=3711 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.483993 sshd[3708]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:04.483000 audit[3708]: USER_END pid=3708 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.483000 audit[3708]: CRED_DISP pid=3708 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:04.486565 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:41468.service: Deactivated successfully. Jun 25 14:28:04.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:41468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:28:04.487312 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:28:04.488285 systemd-logind[1231]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:28:04.489071 systemd-logind[1231]: Removed session 12. Jun 25 14:28:04.582729 kubelet[2256]: E0625 14:28:04.582620 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:05.427886 containerd[1244]: time="2024-06-25T14:28:05.427830875Z" level=info msg="StopPodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\"" Jun 25 14:28:05.428288 containerd[1244]: time="2024-06-25T14:28:05.428251835Z" level=info msg="StopPodSandbox for \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\"" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.472 [INFO][3757] k8s.go 608: Cleaning up netns ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.472 [INFO][3757] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" iface="eth0" netns="/var/run/netns/cni-b7873ce0-26fe-cc22-999f-6da42b23d140" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.473 [INFO][3757] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" iface="eth0" netns="/var/run/netns/cni-b7873ce0-26fe-cc22-999f-6da42b23d140" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.473 [INFO][3757] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" iface="eth0" netns="/var/run/netns/cni-b7873ce0-26fe-cc22-999f-6da42b23d140" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.473 [INFO][3757] k8s.go 615: Releasing IP address(es) ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.473 [INFO][3757] utils.go 188: Calico CNI releasing IP address ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.498 [INFO][3773] ipam_plugin.go 411: Releasing address using handleID ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.498 [INFO][3773] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.499 [INFO][3773] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.512 [WARNING][3773] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.512 [INFO][3773] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.514 [INFO][3773] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:05.522212 containerd[1244]: 2024-06-25 14:28:05.517 [INFO][3757] k8s.go 621: Teardown processing complete. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:05.524560 containerd[1244]: time="2024-06-25T14:28:05.524508602Z" level=info msg="TearDown network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" successfully" Jun 25 14:28:05.524560 containerd[1244]: time="2024-06-25T14:28:05.524559162Z" level=info msg="StopPodSandbox for \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" returns successfully" Jun 25 14:28:05.525700 systemd[1]: run-netns-cni\x2db7873ce0\x2d26fe\x2dcc22\x2d999f\x2d6da42b23d140.mount: Deactivated successfully. Jun 25 14:28:05.527041 kubelet[2256]: E0625 14:28:05.526617 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:05.527569 containerd[1244]: time="2024-06-25T14:28:05.527480922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-68qx4,Uid:e7c7496b-13ed-42c4-b1e1-6a2ce57313f1,Namespace:kube-system,Attempt:1,}" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.481 [INFO][3758] k8s.go 608: Cleaning up netns ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.482 [INFO][3758] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" iface="eth0" netns="/var/run/netns/cni-ad6d8f0b-17a1-147a-8ce5-3a0ad6a9228e" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.482 [INFO][3758] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" iface="eth0" netns="/var/run/netns/cni-ad6d8f0b-17a1-147a-8ce5-3a0ad6a9228e" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.482 [INFO][3758] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" iface="eth0" netns="/var/run/netns/cni-ad6d8f0b-17a1-147a-8ce5-3a0ad6a9228e" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.482 [INFO][3758] k8s.go 615: Releasing IP address(es) ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.482 [INFO][3758] utils.go 188: Calico CNI releasing IP address ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.500 [INFO][3779] ipam_plugin.go 411: Releasing address using handleID ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.500 [INFO][3779] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.514 [INFO][3779] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.526 [WARNING][3779] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.526 [INFO][3779] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.528 [INFO][3779] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:05.533024 containerd[1244]: 2024-06-25 14:28:05.531 [INFO][3758] k8s.go 621: Teardown processing complete. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:05.533696 containerd[1244]: time="2024-06-25T14:28:05.533666642Z" level=info msg="TearDown network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" successfully" Jun 25 14:28:05.533781 containerd[1244]: time="2024-06-25T14:28:05.533761602Z" level=info msg="StopPodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" returns successfully" Jun 25 14:28:05.535438 systemd[1]: run-netns-cni\x2dad6d8f0b\x2d17a1\x2d147a\x2d8ce5\x2d3a0ad6a9228e.mount: Deactivated successfully. 
Jun 25 14:28:05.536495 containerd[1244]: time="2024-06-25T14:28:05.536454563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d6897f-22rg5,Uid:2dba4c0d-dc07-4d6e-a4e1-19d948f912fa,Namespace:calico-system,Attempt:1,}" Jun 25 14:28:05.584381 kubelet[2256]: E0625 14:28:05.584070 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:05.682908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:28:05.683017 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali606725fa246: link becomes ready Jun 25 14:28:05.683145 systemd-networkd[1082]: cali606725fa246: Link UP Jun 25 14:28:05.683305 systemd-networkd[1082]: cali606725fa246: Gained carrier Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.601 [INFO][3791] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--68qx4-eth0 coredns-5dd5756b68- kube-system e7c7496b-13ed-42c4-b1e1-6a2ce57313f1 816 0 2024-06-25 14:27:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-68qx4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali606725fa246 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.601 [INFO][3791] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.625 [INFO][3821] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" HandleID="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.645 [INFO][3821] ipam_plugin.go 264: Auto assigning IP ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" HandleID="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-68qx4", "timestamp":"2024-06-25 14:28:05.625062929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.645 [INFO][3821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.646 [INFO][3821] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.646 [INFO][3821] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.648 [INFO][3821] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.654 [INFO][3821] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.658 [INFO][3821] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.660 [INFO][3821] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.662 [INFO][3821] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.662 [INFO][3821] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.666 [INFO][3821] ipam.go 1685: Creating new handle: k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876 Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.669 [INFO][3821] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.673 [INFO][3821] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.673 [INFO][3821] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" host="localhost" Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.674 [INFO][3821] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:28:05.698486 containerd[1244]: 2024-06-25 14:28:05.674 [INFO][3821] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" HandleID="k8s-pod-network.97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.699140 containerd[1244]: 2024-06-25 14:28:05.677 [INFO][3791] k8s.go 386: Populated endpoint ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--68qx4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-68qx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali606725fa246", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:05.699140 containerd[1244]: 2024-06-25 14:28:05.677 [INFO][3791] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.699140 containerd[1244]: 2024-06-25 14:28:05.677 [INFO][3791] dataplane_linux.go 68: Setting the host side veth name to cali606725fa246 ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.699140 containerd[1244]: 2024-06-25 14:28:05.685 [INFO][3791] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.699140 containerd[1244]: 2024-06-25 14:28:05.686 [INFO][3791] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--68qx4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876", Pod:"coredns-5dd5756b68-68qx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali606725fa246", MAC:"12:3b:90:8b:69:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:05.699140 containerd[1244]: 2024-06-25 14:28:05.696 [INFO][3791] k8s.go 500: Wrote updated endpoint to datastore ContainerID="97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876" Namespace="kube-system" Pod="coredns-5dd5756b68-68qx4" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:05.704000 audit[3845]: NETFILTER_CFG table=filter:108 family=2 entries=30 op=nft_register_chain pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:28:05.704000 audit[3845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17032 a0=3 a1=ffffebea3b60 a2=0 a3=ffffbee4ffa8 items=0 ppid=3322 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.704000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:28:05.714995 systemd-networkd[1082]: cali40851ec1a0d: Link UP Jun 25 14:28:05.716097 systemd-networkd[1082]: cali40851ec1a0d: Gained carrier Jun 25 14:28:05.716445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali40851ec1a0d: link becomes ready Jun 25 14:28:05.724999 containerd[1244]: time="2024-06-25T14:28:05.724928976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:28:05.724999 containerd[1244]: time="2024-06-25T14:28:05.724975616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:05.725440 containerd[1244]: time="2024-06-25T14:28:05.725307456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:28:05.725440 containerd[1244]: time="2024-06-25T14:28:05.725396296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:05.736000 audit[3872]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:28:05.736000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21524 a0=3 a1=ffffd712c210 a2=0 a3=ffff8b466fa8 items=0 ppid=3322 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.736000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.599 [INFO][3802] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0 calico-kube-controllers-7857d6897f- calico-system 2dba4c0d-dc07-4d6e-a4e1-19d948f912fa 817 0 2024-06-25 14:27:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7857d6897f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7857d6897f-22rg5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali40851ec1a0d [] []}} ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.599 [INFO][3802] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.625 [INFO][3820] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" HandleID="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.648 [INFO][3820] ipam_plugin.go 264: Auto assigning IP ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" HandleID="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7857d6897f-22rg5", "timestamp":"2024-06-25 14:28:05.625759489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.648 [INFO][3820] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.674 [INFO][3820] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.674 [INFO][3820] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.675 [INFO][3820] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.680 [INFO][3820] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.687 [INFO][3820] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.689 [INFO][3820] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.695 [INFO][3820] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.695 [INFO][3820] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.699 [INFO][3820] ipam.go 1685: Creating new handle: k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169 Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.704 [INFO][3820] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.709 [INFO][3820] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.709 [INFO][3820] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" host="localhost" Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.709 [INFO][3820] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:28:05.739232 containerd[1244]: 2024-06-25 14:28:05.709 [INFO][3820] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" HandleID="k8s-pod-network.74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.739746 containerd[1244]: 2024-06-25 14:28:05.711 [INFO][3802] k8s.go 386: Populated endpoint ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0", GenerateName:"calico-kube-controllers-7857d6897f-", Namespace:"calico-system", SelfLink:"", UID:"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7857d6897f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7857d6897f-22rg5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali40851ec1a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:05.739746 containerd[1244]: 2024-06-25 14:28:05.712 [INFO][3802] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.739746 containerd[1244]: 2024-06-25 14:28:05.712 [INFO][3802] dataplane_linux.go 68: Setting the host side veth name to cali40851ec1a0d ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.739746 containerd[1244]: 2024-06-25 14:28:05.716 [INFO][3802] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.739746 containerd[1244]: 2024-06-25 14:28:05.716 [INFO][3802] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0", GenerateName:"calico-kube-controllers-7857d6897f-", Namespace:"calico-system", SelfLink:"", UID:"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7857d6897f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169", Pod:"calico-kube-controllers-7857d6897f-22rg5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali40851ec1a0d", MAC:"ea:0a:f8:7e:fc:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:05.739746 containerd[1244]: 2024-06-25 14:28:05.732 [INFO][3802] k8s.go 500: Wrote updated endpoint to datastore ContainerID="74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169" Namespace="calico-system" Pod="calico-kube-controllers-7857d6897f-22rg5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:05.767526 systemd[1]: Started cri-containerd-97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876.scope - libcontainer container 97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876. 
Jun 25 14:28:05.775000 audit: BPF prog-id=144 op=LOAD Jun 25 14:28:05.776000 audit: BPF prog-id=145 op=LOAD Jun 25 14:28:05.776000 audit[3879]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3861 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653039613566393364383030336436653363313930333838343862 Jun 25 14:28:05.776000 audit: BPF prog-id=146 op=LOAD Jun 25 14:28:05.776000 audit[3879]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3861 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653039613566393364383030336436653363313930333838343862 Jun 25 14:28:05.776000 audit: BPF prog-id=146 op=UNLOAD Jun 25 14:28:05.776000 audit: BPF prog-id=145 op=UNLOAD Jun 25 14:28:05.776000 audit: BPF prog-id=147 op=LOAD Jun 25 14:28:05.776000 audit[3879]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3861 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653039613566393364383030336436653363313930333838343862 Jun 25 14:28:05.778423 systemd-resolved[1185]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:28:05.789527 containerd[1244]: time="2024-06-25T14:28:05.789427261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:28:05.789653 containerd[1244]: time="2024-06-25T14:28:05.789489661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:05.789653 containerd[1244]: time="2024-06-25T14:28:05.789512581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:28:05.789653 containerd[1244]: time="2024-06-25T14:28:05.789526021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:05.801562 containerd[1244]: time="2024-06-25T14:28:05.801521182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-68qx4,Uid:e7c7496b-13ed-42c4-b1e1-6a2ce57313f1,Namespace:kube-system,Attempt:1,} returns sandbox id \"97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876\"" Jun 25 14:28:05.802239 kubelet[2256]: E0625 14:28:05.802199 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:05.804626 containerd[1244]: time="2024-06-25T14:28:05.804587902Z" level=info msg="CreateContainer within sandbox \"97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:28:05.815972 containerd[1244]: time="2024-06-25T14:28:05.815927543Z" level=info msg="CreateContainer within sandbox \"97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11ac95b81657c66f53a3eae1200f588dd463d69e6a95f998fb521c903507e9be\"" Jun 25 14:28:05.817486 containerd[1244]: time="2024-06-25T14:28:05.817454583Z" level=info msg="StartContainer for \"11ac95b81657c66f53a3eae1200f588dd463d69e6a95f998fb521c903507e9be\"" Jun 25 14:28:05.818540 systemd[1]: Started cri-containerd-74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169.scope - libcontainer container 74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169. Jun 25 14:28:05.836000 audit: BPF prog-id=148 op=LOAD Jun 25 14:28:05.836000 audit: BPF prog-id=149 op=LOAD Jun 25 14:28:05.836000 audit[3925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3909 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734646232353166343530333166653430323566316361626165376631 Jun 25 14:28:05.836000 audit: BPF prog-id=150 op=LOAD Jun 25 14:28:05.836000 audit[3925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3909 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734646232353166343530333166653430323566316361626165376631 Jun 25 14:28:05.836000 audit: BPF prog-id=150 op=UNLOAD Jun 25 14:28:05.836000 audit: BPF prog-id=149 op=UNLOAD Jun 25 14:28:05.836000 audit: BPF prog-id=151 op=LOAD Jun 25 14:28:05.836000 audit[3925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3909 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.836000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734646232353166343530333166653430323566316361626165376631 Jun 25 14:28:05.838849 systemd-resolved[1185]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:28:05.853503 systemd[1]: Started cri-containerd-11ac95b81657c66f53a3eae1200f588dd463d69e6a95f998fb521c903507e9be.scope - libcontainer container 11ac95b81657c66f53a3eae1200f588dd463d69e6a95f998fb521c903507e9be. Jun 25 14:28:05.863397 containerd[1244]: time="2024-06-25T14:28:05.863255266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d6897f-22rg5,Uid:2dba4c0d-dc07-4d6e-a4e1-19d948f912fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169\"" Jun 25 14:28:05.864000 audit: BPF prog-id=152 op=LOAD Jun 25 14:28:05.866166 containerd[1244]: time="2024-06-25T14:28:05.865934906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:28:05.865000 audit: BPF prog-id=153 op=LOAD Jun 25 14:28:05.865000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3861 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131616339356238313635376336366635336133656165313230306635 Jun 25 14:28:05.865000 audit: BPF prog-id=154 op=LOAD Jun 25 14:28:05.865000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3861 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131616339356238313635376336366635336133656165313230306635 Jun 25 14:28:05.865000 audit: BPF prog-id=154 op=UNLOAD Jun 25 14:28:05.865000 audit: BPF prog-id=153 op=UNLOAD Jun 25 14:28:05.865000 audit: BPF prog-id=155 op=LOAD Jun 25 14:28:05.865000 audit[3954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3861 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:05.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131616339356238313635376336366635336133656165313230306635 Jun 25 14:28:05.879912 containerd[1244]: time="2024-06-25T14:28:05.879868627Z" level=info msg="StartContainer for \"11ac95b81657c66f53a3eae1200f588dd463d69e6a95f998fb521c903507e9be\" returns 
successfully" Jun 25 14:28:06.428657 containerd[1244]: time="2024-06-25T14:28:06.428593628Z" level=info msg="StopPodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\"" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.478 [INFO][4014] k8s.go 608: Cleaning up netns ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.479 [INFO][4014] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" iface="eth0" netns="/var/run/netns/cni-3c0f054b-f78f-82a9-5f5f-bd621704ce16" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.479 [INFO][4014] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" iface="eth0" netns="/var/run/netns/cni-3c0f054b-f78f-82a9-5f5f-bd621704ce16" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.479 [INFO][4014] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" iface="eth0" netns="/var/run/netns/cni-3c0f054b-f78f-82a9-5f5f-bd621704ce16" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.479 [INFO][4014] k8s.go 615: Releasing IP address(es) ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.479 [INFO][4014] utils.go 188: Calico CNI releasing IP address ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.498 [INFO][4024] ipam_plugin.go 411: Releasing address using handleID ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.498 [INFO][4024] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.498 [INFO][4024] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.512 [WARNING][4024] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.512 [INFO][4024] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.513 [INFO][4024] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:06.517050 containerd[1244]: 2024-06-25 14:28:06.515 [INFO][4014] k8s.go 621: Teardown processing complete. 
ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:06.517822 containerd[1244]: time="2024-06-25T14:28:06.517235959Z" level=info msg="TearDown network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" successfully" Jun 25 14:28:06.517822 containerd[1244]: time="2024-06-25T14:28:06.517285919Z" level=info msg="StopPodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" returns successfully" Jun 25 14:28:06.518027 containerd[1244]: time="2024-06-25T14:28:06.517898162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm8qv,Uid:eca06206-c460-41f2-8686-c513e245df74,Namespace:calico-system,Attempt:1,}" Jun 25 14:28:06.526175 systemd[1]: run-netns-cni\x2d3c0f054b\x2df78f\x2d82a9\x2d5f5f\x2dbd621704ce16.mount: Deactivated successfully. Jun 25 14:28:06.592021 kubelet[2256]: E0625 14:28:06.591986 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:06.607864 kubelet[2256]: I0625 14:28:06.607336 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-68qx4" podStartSLOduration=31.607297176 podCreationTimestamp="2024-06-25 14:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:28:06.606744614 +0000 UTC m=+45.276908351" watchObservedRunningTime="2024-06-25 14:28:06.607297176 +0000 UTC m=+45.277460873" Jun 25 14:28:06.621000 audit[4053]: NETFILTER_CFG table=filter:110 family=2 entries=8 op=nft_register_rule pid=4053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:06.621000 audit[4053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffee80cbe0 a2=0 a3=1 items=0 ppid=2430 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:06.623000 audit[4053]: NETFILTER_CFG table=nat:111 family=2 entries=44 op=nft_register_rule pid=4053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:06.623000 audit[4053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffee80cbe0 a2=0 a3=1 items=0 ppid=2430 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:06.635000 audit[4055]: NETFILTER_CFG table=filter:112 family=2 entries=8 op=nft_register_rule pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:06.635000 audit[4055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffecdc8200 a2=0 a3=1 items=0 ppid=2430 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.635000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:06.647000 audit[4055]: NETFILTER_CFG table=nat:113 family=2 entries=56 op=nft_register_chain pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:06.647000 audit[4055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffecdc8200 a2=0 a3=1 items=0 ppid=2430 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:06.656513 systemd-networkd[1082]: cali4a44247f184: Link UP Jun 25 14:28:06.657849 systemd-networkd[1082]: cali4a44247f184: Gained carrier Jun 25 14:28:06.658373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4a44247f184: link becomes ready Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.562 [INFO][4031] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jm8qv-eth0 csi-node-driver- calico-system eca06206-c460-41f2-8686-c513e245df74 835 0 2024-06-25 14:27:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-jm8qv eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali4a44247f184 [] []}} ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.562 [INFO][4031] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.600 [INFO][4044] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" HandleID="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.623 [INFO][4044] ipam_plugin.go 264: Auto assigning IP ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" HandleID="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000623dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jm8qv", "timestamp":"2024-06-25 14:28:06.600117226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.623 [INFO][4044] ipam_plugin.go 
352: About to acquire host-wide IPAM lock. Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.623 [INFO][4044] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.623 [INFO][4044] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.626 [INFO][4044] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.630 [INFO][4044] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.637 [INFO][4044] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.639 [INFO][4044] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.641 [INFO][4044] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.641 [INFO][4044] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.643 [INFO][4044] ipam.go 1685: Creating new handle: k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258 Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.646 [INFO][4044] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.652 [INFO][4044] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.652 [INFO][4044] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" host="localhost" Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.652 [INFO][4044] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:28:06.669534 containerd[1244]: 2024-06-25 14:28:06.652 [INFO][4044] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" HandleID="k8s-pod-network.4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.670377 containerd[1244]: 2024-06-25 14:28:06.654 [INFO][4031] k8s.go 386: Populated endpoint ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jm8qv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eca06206-c460-41f2-8686-c513e245df74", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jm8qv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4a44247f184", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:06.670377 containerd[1244]: 2024-06-25 14:28:06.654 [INFO][4031] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.670377 containerd[1244]: 2024-06-25 14:28:06.654 [INFO][4031] dataplane_linux.go 68: Setting the host side veth name to cali4a44247f184 ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.670377 containerd[1244]: 2024-06-25 14:28:06.658 [INFO][4031] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.670377 containerd[1244]: 2024-06-25 14:28:06.658 [INFO][4031] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jm8qv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eca06206-c460-41f2-8686-c513e245df74", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258", Pod:"csi-node-driver-jm8qv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4a44247f184", MAC:"fe:90:b7:7a:21:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:06.670377 containerd[1244]: 2024-06-25 14:28:06.666 [INFO][4031] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258" Namespace="calico-system" Pod="csi-node-driver-jm8qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:06.680000 audit[4072]: NETFILTER_CFG table=filter:114 family=2 entries=48 op=nft_register_chain pid=4072 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:28:06.680000 audit[4072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23868 a0=3 a1=ffffd0654fd0 a2=0 a3=ffffaedfbfa8 items=0 ppid=3322 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.680000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:28:06.692369 containerd[1244]: time="2024-06-25T14:28:06.692275132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:28:06.692369 containerd[1244]: time="2024-06-25T14:28:06.692324692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:06.692369 containerd[1244]: time="2024-06-25T14:28:06.692354972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:28:06.692369 containerd[1244]: time="2024-06-25T14:28:06.692367452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:06.710534 systemd[1]: Started cri-containerd-4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258.scope - libcontainer container 4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258. 
Jun 25 14:28:06.720000 audit: BPF prog-id=156 op=LOAD Jun 25 14:28:06.720000 audit: BPF prog-id=157 op=LOAD Jun 25 14:28:06.720000 audit[4090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=4080 pid=4090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465303134373332316563613531633063616636313438313266343762 Jun 25 14:28:06.720000 audit: BPF prog-id=158 op=LOAD Jun 25 14:28:06.720000 audit[4090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=4080 pid=4090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465303134373332316563613531633063616636313438313266343762 Jun 25 14:28:06.721000 audit: BPF prog-id=158 op=UNLOAD Jun 25 14:28:06.721000 audit: BPF prog-id=157 op=UNLOAD Jun 25 14:28:06.721000 audit: BPF prog-id=159 op=LOAD Jun 25 14:28:06.721000 audit[4090]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=4080 pid=4090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:06.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465303134373332316563613531633063616636313438313266343762 Jun 25 14:28:06.722874 systemd-resolved[1185]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:28:06.739533 containerd[1244]: time="2024-06-25T14:28:06.739484209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm8qv,Uid:eca06206-c460-41f2-8686-c513e245df74,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258\"" Jun 25 14:28:07.155792 systemd-networkd[1082]: cali40851ec1a0d: Gained IPv6LL Jun 25 14:28:07.218742 systemd-networkd[1082]: cali606725fa246: Gained IPv6LL Jun 25 14:28:07.484360 containerd[1244]: time="2024-06-25T14:28:07.484296521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:07.485144 containerd[1244]: time="2024-06-25T14:28:07.485036108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:28:07.485842 containerd[1244]: time="2024-06-25T14:28:07.485800177Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:07.487671 
containerd[1244]: time="2024-06-25T14:28:07.487627124Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:07.488980 containerd[1244]: time="2024-06-25T14:28:07.488941013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:07.490481 containerd[1244]: time="2024-06-25T14:28:07.490434748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.624443162s" Jun 25 14:28:07.490549 containerd[1244]: time="2024-06-25T14:28:07.490482430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:28:07.491301 containerd[1244]: time="2024-06-25T14:28:07.491261579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:28:07.498632 containerd[1244]: time="2024-06-25T14:28:07.498577210Z" level=info msg="CreateContainer within sandbox \"74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:28:07.513812 containerd[1244]: time="2024-06-25T14:28:07.513763773Z" level=info msg="CreateContainer within sandbox \"74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60\"" Jun 25 14:28:07.514334 containerd[1244]: time="2024-06-25T14:28:07.514233470Z" level=info msg="StartContainer for \"b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60\"" Jun 25 14:28:07.552513 systemd[1]: Started cri-containerd-b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60.scope - libcontainer container b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60. 
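The "in 1.624443162s" figure above is simply the elapsed wall-clock time containerd measured for the pull. A minimal sketch of the same timestamp arithmetic, using the firstStartedPulling and lastFinishedPulling values kubelet reports for this image later in the log (the small difference from containerd's own figure comes from the two components timing the pull at slightly different points):

```python
import re
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    """Parse an RFC 3339 timestamp, truncating nanoseconds to microseconds."""
    ts = re.sub(r"(\.\d{6})\d+", r"\1", ts.replace("Z", "+00:00"))
    return datetime.fromisoformat(ts)

# Pull window for ghcr.io/flatcar/calico/kube-controllers:v3.28.0,
# taken from the kubelet pod-startup entry further down in this log.
started = parse_ts("2024-06-25T14:28:05.865386546Z")
finished = parse_ts("2024-06-25T14:28:07.490768921Z")

print(f"pull took about {(finished - started).total_seconds():.3f}s")
```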
Jun 25 14:28:07.567000 audit: BPF prog-id=160 op=LOAD Jun 25 14:28:07.567000 audit: BPF prog-id=161 op=LOAD Jun 25 14:28:07.567000 audit[4133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3909 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:07.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613635623139376538646235306236313936633338363464303633 Jun 25 14:28:07.567000 audit: BPF prog-id=162 op=LOAD Jun 25 14:28:07.567000 audit[4133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3909 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:07.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613635623139376538646235306236313936633338363464303633 Jun 25 14:28:07.567000 audit: BPF prog-id=162 op=UNLOAD Jun 25 14:28:07.567000 audit: BPF prog-id=161 op=UNLOAD Jun 25 14:28:07.567000 audit: BPF prog-id=163 op=LOAD Jun 25 14:28:07.567000 audit[4133]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3909 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:07.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613635623139376538646235306236313936633338363464303633 Jun 25 14:28:07.593544 containerd[1244]: time="2024-06-25T14:28:07.592950387Z" level=info msg="StartContainer for \"b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60\" returns successfully" Jun 25 14:28:07.598953 kubelet[2256]: E0625 14:28:07.598923 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:07.922755 systemd-networkd[1082]: cali4a44247f184: Gained IPv6LL Jun 25 14:28:08.488271 containerd[1244]: time="2024-06-25T14:28:08.488205220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:08.488701 containerd[1244]: time="2024-06-25T14:28:08.488658036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:28:08.489482 containerd[1244]: time="2024-06-25T14:28:08.489447544Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:08.490963 containerd[1244]: time="2024-06-25T14:28:08.490922798Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:08.492148 containerd[1244]: time="2024-06-25T14:28:08.492114400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:08.493039 containerd[1244]: time="2024-06-25T14:28:08.493006993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.001703972s" Jun 25 14:28:08.493156 containerd[1244]: time="2024-06-25T14:28:08.493132637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:28:08.495164 containerd[1244]: time="2024-06-25T14:28:08.495124389Z" level=info msg="CreateContainer within sandbox \"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:28:08.507164 containerd[1244]: time="2024-06-25T14:28:08.507117461Z" level=info msg="CreateContainer within sandbox \"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4d374236f8a0813ff03d1cc6889b72eeaced138e793c04b4872bd0e1e8534574\"" Jun 25 14:28:08.507563 containerd[1244]: time="2024-06-25T14:28:08.507533236Z" level=info msg="StartContainer for \"4d374236f8a0813ff03d1cc6889b72eeaced138e793c04b4872bd0e1e8534574\"" Jun 25 14:28:08.538503 systemd[1]: Started cri-containerd-4d374236f8a0813ff03d1cc6889b72eeaced138e793c04b4872bd0e1e8534574.scope - libcontainer container 4d374236f8a0813ff03d1cc6889b72eeaced138e793c04b4872bd0e1e8534574. 
Jun 25 14:28:08.549000 audit: BPF prog-id=164 op=LOAD Jun 25 14:28:08.551786 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 14:28:08.551875 kernel: audit: type=1334 audit(1719325688.549:625): prog-id=164 op=LOAD Jun 25 14:28:08.549000 audit[4175]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4080 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:08.554816 kernel: audit: type=1300 audit(1719325688.549:625): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4080 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:08.554889 kernel: audit: type=1327 audit(1719325688.549:625): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333734323336663861303831336666303364316363363838396237 Jun 25 14:28:08.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333734323336663861303831336666303364316363363838396237 Jun 25 14:28:08.550000 audit: BPF prog-id=165 op=LOAD Jun 25 14:28:08.558416 kernel: audit: type=1334 audit(1719325688.550:626): prog-id=165 op=LOAD Jun 25 14:28:08.558463 kernel: audit: type=1300 audit(1719325688.550:626): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4080 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:08.550000 audit[4175]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4080 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:08.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333734323336663861303831336666303364316363363838396237 Jun 25 14:28:08.563677 kernel: audit: type=1327 audit(1719325688.550:626): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333734323336663861303831336666303364316363363838396237 Jun 25 14:28:08.563735 kernel: audit: type=1334 audit(1719325688.550:627): prog-id=165 op=UNLOAD Jun 25 14:28:08.550000 audit: BPF prog-id=165 op=UNLOAD Jun 25 14:28:08.550000 audit: BPF prog-id=164 op=UNLOAD Jun 25 14:28:08.564965 kernel: audit: type=1334 audit(1719325688.550:628): prog-id=164 op=UNLOAD Jun 25 14:28:08.565024 kernel: audit: type=1334 audit(1719325688.550:629): prog-id=166 op=LOAD Jun 25 14:28:08.550000 audit: BPF prog-id=166 op=LOAD Jun 25 14:28:08.550000 audit[4175]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4080 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:08.569371 kernel: audit: type=1300 audit(1719325688.550:629): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4080 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:08.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333734323336663861303831336666303364316363363838396237 Jun 25 14:28:08.580269 containerd[1244]: time="2024-06-25T14:28:08.580176613Z" level=info msg="StartContainer for \"4d374236f8a0813ff03d1cc6889b72eeaced138e793c04b4872bd0e1e8534574\" returns successfully" Jun 25 14:28:08.585152 containerd[1244]: time="2024-06-25T14:28:08.583167481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:28:08.601527 kubelet[2256]: E0625 14:28:08.601457 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:09.503298 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:58020.service - OpenSSH per-connection server daemon (10.0.0.1:58020). Jun 25 14:28:09.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:58020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:09.546000 audit[4203]: USER_ACCT pid=4203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:09.547156 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 58020 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:09.547000 audit[4203]: CRED_ACQ pid=4203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:09.548000 audit[4203]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee9e4460 a2=3 a3=1 items=0 ppid=1 pid=4203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:09.548000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:09.548982 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:09.554058 systemd-logind[1231]: New session 13 of user core. Jun 25 14:28:09.560555 systemd[1]: Started session-13.scope - Session 13 of User core. 
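The PROCTITLE field in the audit records above is the standard audit encoding of the process command line: hex-encoded bytes with NUL separators between arguments. A small helper (the function name is ours) turns it back into readable argv entries:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    raw = bytes.fromhex(hex_str)
    return [part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00") if part]

# The short sshd proctitle above decodes to a single string:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# ['sshd: core [priv]']
# The longer runc proctitles decode to
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log', ...].
```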
Jun 25 14:28:09.564000 audit[4203]: USER_START pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:09.567000 audit[4205]: CRED_ACQ pid=4205 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:09.602656 kubelet[2256]: I0625 14:28:09.602615 2256 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:28:09.744339 containerd[1244]: time="2024-06-25T14:28:09.744283049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:09.745177 containerd[1244]: time="2024-06-25T14:28:09.745132399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:28:09.746293 containerd[1244]: time="2024-06-25T14:28:09.746241638Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:09.747887 containerd[1244]: time="2024-06-25T14:28:09.747850014Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:09.749705 containerd[1244]: time="2024-06-25T14:28:09.749662997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:09.750644 containerd[1244]: time="2024-06-25T14:28:09.750599830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.167391628s" Jun 25 14:28:09.750719 containerd[1244]: time="2024-06-25T14:28:09.750648032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:28:09.753504 containerd[1244]: time="2024-06-25T14:28:09.753388368Z" level=info msg="CreateContainer within sandbox \"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:28:09.769492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562489329.mount: Deactivated successfully. 
Jun 25 14:28:09.776062 containerd[1244]: time="2024-06-25T14:28:09.775999600Z" level=info msg="CreateContainer within sandbox \"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"31d8d55fbb794fce0dc62cace986d4a71c7eb8cae675fda561dbb12bdc0608e7\"" Jun 25 14:28:09.776827 containerd[1244]: time="2024-06-25T14:28:09.776795148Z" level=info msg="StartContainer for \"31d8d55fbb794fce0dc62cace986d4a71c7eb8cae675fda561dbb12bdc0608e7\"" Jun 25 14:28:09.811567 sshd[4203]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:09.811626 systemd[1]: Started cri-containerd-31d8d55fbb794fce0dc62cace986d4a71c7eb8cae675fda561dbb12bdc0608e7.scope - libcontainer container 31d8d55fbb794fce0dc62cace986d4a71c7eb8cae675fda561dbb12bdc0608e7. Jun 25 14:28:09.812000 audit[4203]: USER_END pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:09.813000 audit[4203]: CRED_DISP pid=4203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:09.815145 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:58020.service: Deactivated successfully. Jun 25 14:28:09.815972 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:28:09.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:58020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:09.817003 systemd-logind[1231]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:28:09.819383 systemd-logind[1231]: Removed session 13. 
Jun 25 14:28:09.828000 audit: BPF prog-id=167 op=LOAD Jun 25 14:28:09.828000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001b18b0 a2=78 a3=0 items=0 ppid=4080 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:09.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643864353566626237393466636530646336326361636539383664 Jun 25 14:28:09.829000 audit: BPF prog-id=168 op=LOAD Jun 25 14:28:09.829000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001b1640 a2=78 a3=0 items=0 ppid=4080 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:09.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643864353566626237393466636530646336326361636539383664 Jun 25 14:28:09.829000 audit: BPF prog-id=168 op=UNLOAD Jun 25 14:28:09.829000 audit: BPF prog-id=167 op=UNLOAD Jun 25 14:28:09.829000 audit: BPF prog-id=169 op=LOAD Jun 25 14:28:09.829000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001b1b10 a2=78 a3=0 items=0 ppid=4080 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:09.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331643864353566626237393466636530646336326361636539383664 Jun 25 14:28:09.845024 containerd[1244]: time="2024-06-25T14:28:09.843457163Z" level=info msg="StartContainer for \"31d8d55fbb794fce0dc62cace986d4a71c7eb8cae675fda561dbb12bdc0608e7\" returns successfully" Jun 25 14:28:10.311397 kubelet[2256]: I0625 14:28:10.311061 2256 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:28:10.312260 kubelet[2256]: E0625 14:28:10.311958 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:10.464875 kubelet[2256]: I0625 14:28:10.464368 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7857d6897f-22rg5" podStartSLOduration=27.838927887 podCreationTimestamp="2024-06-25 14:27:41 +0000 UTC" firstStartedPulling="2024-06-25 14:28:05.865386546 +0000 UTC m=+44.535550283" lastFinishedPulling="2024-06-25 14:28:07.490768921 +0000 UTC m=+46.160932658" observedRunningTime="2024-06-25 14:28:08.615211635 +0000 UTC m=+47.285375492" watchObservedRunningTime="2024-06-25 14:28:10.464310262 +0000 UTC m=+49.134473999" Jun 25 14:28:10.533906 kubelet[2256]: I0625 14:28:10.533858 2256 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:28:10.534059 kubelet[2256]: I0625 14:28:10.533919 2256 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:28:10.607523 kubelet[2256]: E0625 14:28:10.607421 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:28:10.618702 kubelet[2256]: I0625 14:28:10.618184 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jm8qv" podStartSLOduration=26.607930756000002 podCreationTimestamp="2024-06-25 14:27:41 +0000 UTC" firstStartedPulling="2024-06-25 14:28:06.740661254 +0000 UTC m=+45.410824991" lastFinishedPulling="2024-06-25 14:28:09.75087496 +0000 UTC m=+48.421038737" observedRunningTime="2024-06-25 14:28:10.617895894 +0000 UTC m=+49.288059631" watchObservedRunningTime="2024-06-25 14:28:10.618144502 +0000 UTC m=+49.288308239" Jun 25 14:28:10.765893 systemd[1]: run-containerd-runc-k8s.io-e7f7bd0c3abcb1c6e55dc99f9c7794b53864efa94aeb7947cd197883e078f3e8-runc.LbOA5n.mount: Deactivated successfully. Jun 25 14:28:14.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:58026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:14.828040 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:58026.service - OpenSSH per-connection server daemon (10.0.0.1:58026). Jun 25 14:28:14.828832 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:28:14.828891 kernel: audit: type=1130 audit(1719325694.827:644): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:58026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:28:14.863000 audit[4306]: USER_ACCT pid=4306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.864782 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 58026 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:14.866699 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:14.865000 audit[4306]: CRED_ACQ pid=4306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.868993 kernel: audit: type=1101 audit(1719325694.863:645): pid=4306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.869050 kernel: audit: type=1103 audit(1719325694.865:646): pid=4306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.870841 kernel: audit: type=1006 audit(1719325694.865:647): pid=4306 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 14:28:14.865000 audit[4306]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd353310 a2=3 a3=1 items=0 ppid=1 pid=4306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:14.873636 kernel: audit: type=1300 audit(1719325694.865:647): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd353310 a2=3 a3=1 items=0 ppid=1 pid=4306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:14.865000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:14.874715 kernel: audit: type=1327 audit(1719325694.865:647): proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:14.876204 systemd-logind[1231]: New session 14 of user core. Jun 25 14:28:14.883633 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 14:28:14.887000 audit[4306]: USER_START pid=4306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.889000 audit[4308]: CRED_ACQ pid=4308 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.893557 kernel: audit: type=1105 audit(1719325694.887:648): pid=4306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:14.893685 kernel: audit: type=1103 audit(1719325694.889:649): pid=4308 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:15.069621 sshd[4306]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:15.070000 audit[4306]: USER_END pid=4306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:15.073400 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:58026.service: Deactivated successfully. Jun 25 14:28:15.074252 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:28:15.070000 audit[4306]: CRED_DISP pid=4306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:15.075564 systemd-logind[1231]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:28:15.077642 kernel: audit: type=1106 audit(1719325695.070:650): pid=4306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:15.077756 kernel: audit: type=1104 audit(1719325695.070:651): pid=4306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:15.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:58026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:15.078134 systemd-logind[1231]: Removed session 14. Jun 25 14:28:15.263982 kubelet[2256]: I0625 14:28:15.263937 2256 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:28:15.287742 systemd[1]: run-containerd-runc-k8s.io-b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60-runc.Wh6Axc.mount: Deactivated successfully. 
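Each SSH session above is bracketed by audit records whose audit(&lt;epoch&gt;:&lt;serial&gt;) header carries the event time; session 14, for instance, runs from USER_START at 1719325694.887 to USER_END at 1719325695.070. A minimal sketch (the regex and helper name are ours) for extracting those timestamps and computing a session's length:

```python
import re

AUDIT_TS = re.compile(r"audit\((\d+\.\d+):\d+\)")

def audit_epoch(line: str) -> float:
    """Extract the epoch timestamp from an audit(<epoch>:<serial>) header."""
    match = AUDIT_TS.search(line)
    if match is None:
        raise ValueError("no audit timestamp in line")
    return float(match.group(1))

start = audit_epoch("type=1105 audit(1719325694.887:648): pid=4306 ...")
end = audit_epoch("type=1106 audit(1719325695.070:650): pid=4306 ...")
print(f"session 14 lasted about {end - start:.3f}s")  # about 0.183s
```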
Jun 25 14:28:16.281857 systemd[1]: run-containerd-runc-k8s.io-b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60-runc.WZG1aB.mount: Deactivated successfully. Jun 25 14:28:18.194000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.194000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400232bce0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:18.194000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:18.203000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.203000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002279aa0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:18.203000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:18.365000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.365000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=4007159580 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:28:18.365000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:28:18.365000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.365000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=73 a1=400f813410 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:28:18.365000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:28:18.365000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.365000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=4013586c60 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:28:18.365000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:28:18.371000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.371000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=400f7f34d0 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:28:18.371000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:28:18.394000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.394000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=4007159700 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:28:18.394000 audit[2134]: AVC avc: denied { watch } for pid=2134 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6273 scontext=system_u:system_r:container_t:s0:c81,c1000 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:18.394000 audit[2134]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=73 a1=400f7f3500 a2=fc6 a3=0 items=0 ppid=1973 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:container_t:s0:c81,c1000 key=(null) Jun 25 14:28:18.394000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:28:18.394000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3835002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 14:28:20.080696 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:43152.service - OpenSSH per-connection server daemon (10.0.0.1:43152). Jun 25 14:28:20.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:43152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:20.083571 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 14:28:20.083667 kernel: audit: type=1130 audit(1719325700.080:661): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:43152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:20.113000 audit[4376]: USER_ACCT pid=4376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.114157 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 43152 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:20.115562 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:20.114000 audit[4376]: CRED_ACQ pid=4376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.118691 kernel: audit: type=1101 audit(1719325700.113:662): pid=4376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.118763 kernel: audit: type=1103 audit(1719325700.114:663): pid=4376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.118784 kernel: audit: type=1006 audit(1719325700.114:664): pid=4376 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 14:28:20.120134 kernel: audit: type=1300 audit(1719325700.114:664): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd485bde0 a2=3 a3=1 items=0 ppid=1 pid=4376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.114000 audit[4376]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd485bde0 a2=3 a3=1 items=0 
ppid=1 pid=4376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.123885 kernel: audit: type=1327 audit(1719325700.114:664): proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:20.114000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:20.124144 systemd-logind[1231]: New session 15 of user core. Jun 25 14:28:20.138649 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 14:28:20.144000 audit[4376]: USER_START pid=4376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.146000 audit[4378]: CRED_ACQ pid=4378 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.150328 kernel: audit: type=1105 audit(1719325700.144:665): pid=4376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.150423 kernel: audit: type=1103 audit(1719325700.146:666): pid=4378 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.320000 audit[4388]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:20.320000 audit[4388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff58825b0 a2=0 a3=1 items=0 ppid=2430 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.327537 kernel: audit: type=1325 audit(1719325700.320:667): table=filter:115 family=2 entries=9 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:20.327644 kernel: audit: type=1300 audit(1719325700.320:667): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff58825b0 a2=0 a3=1 items=0 ppid=2430 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:20.340880 kubelet[2256]: I0625 14:28:20.340739 2256 topology_manager.go:215] "Topology Admit Handler" podUID="272d8c83-af8c-48b5-aec6-c325e60495b7" podNamespace="calico-apiserver" podName="calico-apiserver-59f66fc9b4-x8kj8" Jun 25 14:28:20.322000 audit[4388]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:20.322000 audit[4388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 
a1=fffff58825b0 a2=0 a3=1 items=0 ppid=2430 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:20.347430 systemd[1]: Created slice kubepods-besteffort-pod272d8c83_af8c_48b5_aec6_c325e60495b7.slice - libcontainer container kubepods-besteffort-pod272d8c83_af8c_48b5_aec6_c325e60495b7.slice. Jun 25 14:28:20.351385 sshd[4376]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:20.352000 audit[4376]: USER_END pid=4376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.352000 audit[4376]: CRED_DISP pid=4376 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:20.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:43152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:20.355753 systemd-logind[1231]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:28:20.356050 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:43152.service: Deactivated successfully. Jun 25 14:28:20.356956 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:28:20.357868 systemd-logind[1231]: Removed session 15. 
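The slice systemd creates for the new calico-apiserver pod above follows kubelet's systemd cgroup naming: a QoS-class prefix plus the pod UID with dashes replaced by underscores, which is how 272d8c83-af8c-48b5-aec6-c325e60495b7 becomes kubepods-besteffort-pod272d8c83_af8c_48b5_aec6_c325e60495b7.slice. A small sketch of that mapping (the helper is ours, it builds only the leaf slice name, and it does not cover Guaranteed pods, which drop the QoS segment):

```python
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    """Build the leaf systemd slice name kubelet uses for a pod's cgroup."""
    escaped_uid = pod_uid.replace("-", "_")  # systemd escapes '-' in unit names
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

print(pod_slice_name("272d8c83-af8c-48b5-aec6-c325e60495b7"))
# kubepods-besteffort-pod272d8c83_af8c_48b5_aec6_c325e60495b7.slice
```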
Jun 25 14:28:20.367000 audit[4391]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:20.367000 audit[4391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffdec1f940 a2=0 a3=1 items=0 ppid=2430 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:20.367000 audit[4391]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:20.367000 audit[4391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdec1f940 a2=0 a3=1 items=0 ppid=2430 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:20.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:20.384805 kubelet[2256]: I0625 14:28:20.384746 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/272d8c83-af8c-48b5-aec6-c325e60495b7-calico-apiserver-certs\") pod \"calico-apiserver-59f66fc9b4-x8kj8\" (UID: \"272d8c83-af8c-48b5-aec6-c325e60495b7\") " pod="calico-apiserver/calico-apiserver-59f66fc9b4-x8kj8" Jun 25 14:28:20.384805 kubelet[2256]: I0625 14:28:20.384807 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2wq\" (UniqueName: \"kubernetes.io/projected/272d8c83-af8c-48b5-aec6-c325e60495b7-kube-api-access-mt2wq\") pod \"calico-apiserver-59f66fc9b4-x8kj8\" (UID: \"272d8c83-af8c-48b5-aec6-c325e60495b7\") " pod="calico-apiserver/calico-apiserver-59f66fc9b4-x8kj8" Jun 25 14:28:20.490831 kubelet[2256]: E0625 14:28:20.490780 2256 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:28:20.504372 kubelet[2256]: E0625 14:28:20.504258 2256 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272d8c83-af8c-48b5-aec6-c325e60495b7-calico-apiserver-certs podName:272d8c83-af8c-48b5-aec6-c325e60495b7 nodeName:}" failed. No retries permitted until 2024-06-25 14:28:20.993762491 +0000 UTC m=+59.663926228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/272d8c83-af8c-48b5-aec6-c325e60495b7-calico-apiserver-certs") pod "calico-apiserver-59f66fc9b4-x8kj8" (UID: "272d8c83-af8c-48b5-aec6-c325e60495b7") : secret "calico-apiserver-certs" not found Jun 25 14:28:21.089490 kubelet[2256]: E0625 14:28:21.089444 2256 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:28:21.089663 kubelet[2256]: E0625 14:28:21.089523 2256 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272d8c83-af8c-48b5-aec6-c325e60495b7-calico-apiserver-certs podName:272d8c83-af8c-48b5-aec6-c325e60495b7 nodeName:}" failed. 
No retries permitted until 2024-06-25 14:28:22.089506832 +0000 UTC m=+60.759670569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/272d8c83-af8c-48b5-aec6-c325e60495b7-calico-apiserver-certs") pod "calico-apiserver-59f66fc9b4-x8kj8" (UID: "272d8c83-af8c-48b5-aec6-c325e60495b7") : secret "calico-apiserver-certs" not found Jun 25 14:28:21.412652 containerd[1244]: time="2024-06-25T14:28:21.412530464Z" level=info msg="StopPodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\"" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.467 [WARNING][4409] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--758q6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"fbf7d824-77fc-4d35-a17d-65edab6216f5", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e", Pod:"coredns-5dd5756b68-758q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3fcef45ffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.468 [INFO][4409] k8s.go 608: Cleaning up netns ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.468 [INFO][4409] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" iface="eth0" netns="" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.468 [INFO][4409] k8s.go 615: Releasing IP address(es) ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.468 [INFO][4409] utils.go 188: Calico CNI releasing IP address ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.490 [INFO][4419] ipam_plugin.go 411: Releasing address using handleID ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.490 [INFO][4419] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.490 [INFO][4419] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.502 [WARNING][4419] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.503 [INFO][4419] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.506 [INFO][4419] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:21.515920 containerd[1244]: 2024-06-25 14:28:21.514 [INFO][4409] k8s.go 621: Teardown processing complete. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.516526 containerd[1244]: time="2024-06-25T14:28:21.515966434Z" level=info msg="TearDown network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" successfully" Jun 25 14:28:21.516526 containerd[1244]: time="2024-06-25T14:28:21.515997835Z" level=info msg="StopPodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" returns successfully" Jun 25 14:28:21.516845 containerd[1244]: time="2024-06-25T14:28:21.516813335Z" level=info msg="RemovePodSandbox for \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\"" Jun 25 14:28:21.520047 containerd[1244]: time="2024-06-25T14:28:21.516854896Z" level=info msg="Forcibly stopping sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\"" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.573 [WARNING][4443] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--758q6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"fbf7d824-77fc-4d35-a17d-65edab6216f5", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee0acb10438dc5d1380e33e4b9e92b0191bd9881b18b932fb6d6c4b0e3fd713e", Pod:"coredns-5dd5756b68-758q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3fcef45ffd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.574 [INFO][4443] k8s.go 608: Cleaning up netns ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.574 [INFO][4443] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" iface="eth0" netns="" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.574 [INFO][4443] k8s.go 615: Releasing IP address(es) ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.574 [INFO][4443] utils.go 188: Calico CNI releasing IP address ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.597 [INFO][4450] ipam_plugin.go 411: Releasing address using handleID ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.597 [INFO][4450] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.597 [INFO][4450] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.607 [WARNING][4450] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.607 [INFO][4450] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" HandleID="k8s-pod-network.f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Workload="localhost-k8s-coredns--5dd5756b68--758q6-eth0" Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.610 [INFO][4450] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:21.614742 containerd[1244]: 2024-06-25 14:28:21.612 [INFO][4443] k8s.go 621: Teardown processing complete. ContainerID="f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828" Jun 25 14:28:21.615196 containerd[1244]: time="2024-06-25T14:28:21.614776688Z" level=info msg="TearDown network for sandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" successfully" Jun 25 14:28:21.628272 containerd[1244]: time="2024-06-25T14:28:21.628226587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:28:21.628449 containerd[1244]: time="2024-06-25T14:28:21.628303229Z" level=info msg="RemovePodSandbox \"f45e5e32d79f48d7154d13dacc129ed527c9722687d590d53a5effc9a9186828\" returns successfully" Jun 25 14:28:21.628909 containerd[1244]: time="2024-06-25T14:28:21.628881524Z" level=info msg="StopPodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\"" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.675 [WARNING][4472] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jm8qv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eca06206-c460-41f2-8686-c513e245df74", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258", Pod:"csi-node-driver-jm8qv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4a44247f184", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.676 [INFO][4472] k8s.go 608: Cleaning up netns ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.676 [INFO][4472] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" iface="eth0" netns="" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.676 [INFO][4472] k8s.go 615: Releasing IP address(es) ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.676 [INFO][4472] utils.go 188: Calico CNI releasing IP address ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.698 [INFO][4480] ipam_plugin.go 411: Releasing address using handleID ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.698 [INFO][4480] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.698 [INFO][4480] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.709 [WARNING][4480] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.709 [INFO][4480] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.714 [INFO][4480] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:21.718703 containerd[1244]: 2024-06-25 14:28:21.716 [INFO][4472] k8s.go 621: Teardown processing complete. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.718703 containerd[1244]: time="2024-06-25T14:28:21.718664989Z" level=info msg="TearDown network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" successfully" Jun 25 14:28:21.718703 containerd[1244]: time="2024-06-25T14:28:21.718696390Z" level=info msg="StopPodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" returns successfully" Jun 25 14:28:21.719276 containerd[1244]: time="2024-06-25T14:28:21.719196323Z" level=info msg="RemovePodSandbox for \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\"" Jun 25 14:28:21.719276 containerd[1244]: time="2024-06-25T14:28:21.719231284Z" level=info msg="Forcibly stopping sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\"" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.763 [WARNING][4501] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jm8qv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eca06206-c460-41f2-8686-c513e245df74", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e0147321eca51c0caf614812f47b7ea8f8264763c663061169e23a5e6b7e258", Pod:"csi-node-driver-jm8qv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali4a44247f184", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.763 [INFO][4501] k8s.go 608: Cleaning up netns ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.763 [INFO][4501] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" iface="eth0" netns="" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.763 [INFO][4501] k8s.go 615: Releasing IP address(es) ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.763 [INFO][4501] utils.go 188: Calico CNI releasing IP address ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.784 [INFO][4508] ipam_plugin.go 411: Releasing address using handleID ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.784 [INFO][4508] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.784 [INFO][4508] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.793 [WARNING][4508] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.793 [INFO][4508] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" HandleID="k8s-pod-network.c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Workload="localhost-k8s-csi--node--driver--jm8qv-eth0" Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.795 [INFO][4508] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:21.798688 containerd[1244]: 2024-06-25 14:28:21.796 [INFO][4501] k8s.go 621: Teardown processing complete. ContainerID="c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e" Jun 25 14:28:21.803075 containerd[1244]: time="2024-06-25T14:28:21.798716129Z" level=info msg="TearDown network for sandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" successfully" Jun 25 14:28:21.826699 containerd[1244]: time="2024-06-25T14:28:21.826636594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:28:21.826882 containerd[1244]: time="2024-06-25T14:28:21.826718156Z" level=info msg="RemovePodSandbox \"c32c038bc61fc07dbb7b3b08397c69f303531beae461538de12db7d7bd4c714e\" returns successfully" Jun 25 14:28:21.827395 containerd[1244]: time="2024-06-25T14:28:21.827337052Z" level=info msg="StopPodSandbox for \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\"" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.865 [WARNING][4530] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--68qx4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876", Pod:"coredns-5dd5756b68-68qx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali606725fa246", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.866 [INFO][4530] k8s.go 608: Cleaning up netns ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.866 [INFO][4530] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" iface="eth0" netns="" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.866 [INFO][4530] k8s.go 615: Releasing IP address(es) ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.866 [INFO][4530] utils.go 188: Calico CNI releasing IP address ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.887 [INFO][4538] ipam_plugin.go 411: Releasing address using handleID ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.887 [INFO][4538] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.887 [INFO][4538] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.898 [WARNING][4538] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.898 [INFO][4538] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.900 [INFO][4538] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:21.903846 containerd[1244]: 2024-06-25 14:28:21.901 [INFO][4530] k8s.go 621: Teardown processing complete. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.904278 containerd[1244]: time="2024-06-25T14:28:21.903894904Z" level=info msg="TearDown network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" successfully" Jun 25 14:28:21.904278 containerd[1244]: time="2024-06-25T14:28:21.903925905Z" level=info msg="StopPodSandbox for \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" returns successfully" Jun 25 14:28:21.904524 containerd[1244]: time="2024-06-25T14:28:21.904478678Z" level=info msg="RemovePodSandbox for \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\"" Jun 25 14:28:21.904565 containerd[1244]: time="2024-06-25T14:28:21.904525880Z" level=info msg="Forcibly stopping sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\"" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.947 [WARNING][4562] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--68qx4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7c7496b-13ed-42c4-b1e1-6a2ce57313f1", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97e09a5f93d8003d6e3c19038848bcf629131852c899c0f5a8fc73a3c2891876", Pod:"coredns-5dd5756b68-68qx4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali606725fa246", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.948 [INFO][4562] k8s.go 608: Cleaning up netns ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.948 [INFO][4562] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" iface="eth0" netns="" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.948 [INFO][4562] k8s.go 615: Releasing IP address(es) ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.948 [INFO][4562] utils.go 188: Calico CNI releasing IP address ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.969 [INFO][4571] ipam_plugin.go 411: Releasing address using handleID ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.970 [INFO][4571] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.970 [INFO][4571] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.978 [WARNING][4571] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.978 [INFO][4571] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" HandleID="k8s-pod-network.e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Workload="localhost-k8s-coredns--5dd5756b68--68qx4-eth0" Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.980 [INFO][4571] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:21.984938 containerd[1244]: 2024-06-25 14:28:21.981 [INFO][4562] k8s.go 621: Teardown processing complete. ContainerID="e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df" Jun 25 14:28:21.984938 containerd[1244]: time="2024-06-25T14:28:21.984900428Z" level=info msg="TearDown network for sandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" successfully" Jun 25 14:28:21.995218 containerd[1244]: time="2024-06-25T14:28:21.995168927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:28:21.995437 containerd[1244]: time="2024-06-25T14:28:21.995411373Z" level=info msg="RemovePodSandbox \"e8aa423b3ba7c878070f4ac041d1bb336e05bc5c1abc3345a6f1e4dd60f624df\" returns successfully" Jun 25 14:28:21.996138 containerd[1244]: time="2024-06-25T14:28:21.995984068Z" level=info msg="StopPodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\"" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.036 [WARNING][4594] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0", GenerateName:"calico-kube-controllers-7857d6897f-", Namespace:"calico-system", SelfLink:"", UID:"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7857d6897f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169", Pod:"calico-kube-controllers-7857d6897f-22rg5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali40851ec1a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.036 [INFO][4594] k8s.go 608: Cleaning up netns ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.036 [INFO][4594] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" iface="eth0" netns="" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.036 [INFO][4594] k8s.go 615: Releasing IP address(es) ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.036 [INFO][4594] utils.go 188: Calico CNI releasing IP address ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.058 [INFO][4601] ipam_plugin.go 411: Releasing address using handleID ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.059 [INFO][4601] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.059 [INFO][4601] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.068 [WARNING][4601] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.068 [INFO][4601] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.069 [INFO][4601] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:22.073827 containerd[1244]: 2024-06-25 14:28:22.072 [INFO][4594] k8s.go 621: Teardown processing complete. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.074286 containerd[1244]: time="2024-06-25T14:28:22.073882426Z" level=info msg="TearDown network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" successfully" Jun 25 14:28:22.074286 containerd[1244]: time="2024-06-25T14:28:22.073914186Z" level=info msg="StopPodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" returns successfully" Jun 25 14:28:22.074605 containerd[1244]: time="2024-06-25T14:28:22.074551562Z" level=info msg="RemovePodSandbox for \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\"" Jun 25 14:28:22.074686 containerd[1244]: time="2024-06-25T14:28:22.074620084Z" level=info msg="Forcibly stopping sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\"" Jun 25 14:28:22.150600 containerd[1244]: time="2024-06-25T14:28:22.150545310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f66fc9b4-x8kj8,Uid:272d8c83-af8c-48b5-aec6-c325e60495b7,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.118 [WARNING][4623] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0", GenerateName:"calico-kube-controllers-7857d6897f-", Namespace:"calico-system", SelfLink:"", UID:"2dba4c0d-dc07-4d6e-a4e1-19d948f912fa", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7857d6897f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74db251f45031fe4025f1cabae7f1c5d25d8112d62314e23b2cb34191266c169", Pod:"calico-kube-controllers-7857d6897f-22rg5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali40851ec1a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.118 [INFO][4623] k8s.go 608: Cleaning up netns ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.118 [INFO][4623] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" iface="eth0" netns="" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.118 [INFO][4623] k8s.go 615: Releasing IP address(es) ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.119 [INFO][4623] utils.go 188: Calico CNI releasing IP address ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.141 [INFO][4633] ipam_plugin.go 411: Releasing address using handleID ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.141 [INFO][4633] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.141 [INFO][4633] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.150 [WARNING][4633] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.150 [INFO][4633] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" HandleID="k8s-pod-network.fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Workload="localhost-k8s-calico--kube--controllers--7857d6897f--22rg5-eth0" Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.154 [INFO][4633] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:22.159408 containerd[1244]: 2024-06-25 14:28:22.156 [INFO][4623] k8s.go 621: Teardown processing complete. ContainerID="fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64" Jun 25 14:28:22.160572 containerd[1244]: time="2024-06-25T14:28:22.159444488Z" level=info msg="TearDown network for sandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" successfully" Jun 25 14:28:22.165703 containerd[1244]: time="2024-06-25T14:28:22.165659961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:28:22.165964 containerd[1244]: time="2024-06-25T14:28:22.165934488Z" level=info msg="RemovePodSandbox \"fb2bd6e33dd8e2199d0cda9ae672cdfb06cbc7919c7fc9666dce81f077682b64\" returns successfully" Jun 25 14:28:22.314271 systemd-networkd[1082]: caliced31c40c50: Link UP Jun 25 14:28:22.316445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:28:22.316554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliced31c40c50: link becomes ready Jun 25 14:28:22.317191 systemd-networkd[1082]: caliced31c40c50: Gained carrier Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.224 [INFO][4640] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0 calico-apiserver-59f66fc9b4- calico-apiserver 272d8c83-af8c-48b5-aec6-c325e60495b7 981 0 2024-06-25 14:28:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f66fc9b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f66fc9b4-x8kj8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliced31c40c50 [] []}} ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.224 [INFO][4640] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.259 [INFO][4653] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" HandleID="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Workload="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.273 [INFO][4653] ipam_plugin.go 264: Auto assigning IP ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" HandleID="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Workload="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f66fc9b4-x8kj8", "timestamp":"2024-06-25 14:28:22.259238141 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.273 [INFO][4653] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.273 [INFO][4653] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.273 [INFO][4653] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.275 [INFO][4653] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.281 [INFO][4653] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.286 [INFO][4653] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.291 [INFO][4653] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.296 [INFO][4653] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.296 [INFO][4653] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.298 [INFO][4653] ipam.go 1685: Creating new handle: k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.302 [INFO][4653] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.310 [INFO][4653] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.310 [INFO][4653] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" host="localhost" Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.310 [INFO][4653] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:28:22.336452 containerd[1244]: 2024-06-25 14:28:22.310 [INFO][4653] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" HandleID="k8s-pod-network.15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Workload="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.337034 containerd[1244]: 2024-06-25 14:28:22.312 [INFO][4640] k8s.go 386: Populated endpoint ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0", GenerateName:"calico-apiserver-59f66fc9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"272d8c83-af8c-48b5-aec6-c325e60495b7", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f66fc9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f66fc9b4-x8kj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliced31c40c50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:22.337034 containerd[1244]: 2024-06-25 14:28:22.312 [INFO][4640] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.337034 containerd[1244]: 2024-06-25 14:28:22.312 [INFO][4640] dataplane_linux.go 68: Setting the host side veth name to caliced31c40c50 ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.337034 containerd[1244]: 2024-06-25 14:28:22.316 [INFO][4640] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.337034 containerd[1244]: 
2024-06-25 14:28:22.320 [INFO][4640] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0", GenerateName:"calico-apiserver-59f66fc9b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"272d8c83-af8c-48b5-aec6-c325e60495b7", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f66fc9b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c", Pod:"calico-apiserver-59f66fc9b4-x8kj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliced31c40c50", MAC:"92:93:bc:7b:39:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:28:22.337034 containerd[1244]: 2024-06-25 14:28:22.330 [INFO][4640] k8s.go 500: Wrote updated endpoint to datastore ContainerID="15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c" Namespace="calico-apiserver" Pod="calico-apiserver-59f66fc9b4-x8kj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f66fc9b4--x8kj8-eth0" Jun 25 14:28:22.347000 audit[4677]: NETFILTER_CFG table=filter:119 family=2 entries=51 op=nft_register_chain pid=4677 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:28:22.347000 audit[4677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26260 a0=3 a1=ffffdf33e340 a2=0 a3=ffffaa695fa8 items=0 ppid=3322 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:22.347000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:28:22.370246 containerd[1244]: time="2024-06-25T14:28:22.370124546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:28:22.370480 containerd[1244]: time="2024-06-25T14:28:22.370188548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:22.370480 containerd[1244]: time="2024-06-25T14:28:22.370444874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:28:22.370605 containerd[1244]: time="2024-06-25T14:28:22.370465434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:28:22.408584 systemd[1]: Started cri-containerd-15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c.scope - libcontainer container 15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c. Jun 25 14:28:22.429000 audit: BPF prog-id=170 op=LOAD Jun 25 14:28:22.430000 audit: BPF prog-id=171 op=LOAD Jun 25 14:28:22.430000 audit[4696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4686 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:22.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663234363764393932383266653834633435626134316338306164 Jun 25 14:28:22.430000 audit: BPF prog-id=172 op=LOAD Jun 25 14:28:22.430000 audit[4696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4686 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:22.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663234363764393932383266653834633435626134316338306164 Jun 25 14:28:22.430000 audit: BPF prog-id=172 op=UNLOAD Jun 25 14:28:22.430000 audit: BPF prog-id=171 op=UNLOAD Jun 25 14:28:22.430000 audit: BPF prog-id=173 op=LOAD Jun 25 14:28:22.430000 audit[4696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4686 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:22.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663234363764393932383266653834633435626134316338306164 Jun 25 14:28:22.432918 systemd-resolved[1185]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:28:22.459585 containerd[1244]: time="2024-06-25T14:28:22.459535783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f66fc9b4-x8kj8,Uid:272d8c83-af8c-48b5-aec6-c325e60495b7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c\"" Jun 25 14:28:22.462492 containerd[1244]: time="2024-06-25T14:28:22.461387789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:28:24.079832 containerd[1244]: time="2024-06-25T14:28:24.079770904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 14:28:24.080156 
containerd[1244]: time="2024-06-25T14:28:24.079899107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:24.082656 containerd[1244]: time="2024-06-25T14:28:24.082612130Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:24.083467 containerd[1244]: time="2024-06-25T14:28:24.083439790Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:24.086281 containerd[1244]: time="2024-06-25T14:28:24.086050650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:28:24.087373 containerd[1244]: time="2024-06-25T14:28:24.087331480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.62590309s" Jun 25 14:28:24.087488 containerd[1244]: time="2024-06-25T14:28:24.087374441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:28:24.089601 containerd[1244]: time="2024-06-25T14:28:24.089562612Z" level=info msg="CreateContainer within sandbox \"15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:28:24.102905 containerd[1244]: time="2024-06-25T14:28:24.102847882Z" level=info msg="CreateContainer within sandbox \"15f2467d99282fe84c45ba41c80ad6041dc44136cefa2a71e9ec84167c58ca6c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"28dd0470735add26a9e2d74a80d41e019c1e9f095ae34ebee9384971b5319445\"" Jun 25 14:28:24.104533 containerd[1244]: time="2024-06-25T14:28:24.104498201Z" level=info msg="StartContainer for \"28dd0470735add26a9e2d74a80d41e019c1e9f095ae34ebee9384971b5319445\"" Jun 25 14:28:24.144811 systemd[1]: run-containerd-runc-k8s.io-28dd0470735add26a9e2d74a80d41e019c1e9f095ae34ebee9384971b5319445-runc.U5sU1q.mount: Deactivated successfully. Jun 25 14:28:24.159583 systemd[1]: Started cri-containerd-28dd0470735add26a9e2d74a80d41e019c1e9f095ae34ebee9384971b5319445.scope - libcontainer container 28dd0470735add26a9e2d74a80d41e019c1e9f095ae34ebee9384971b5319445. 
Jun 25 14:28:24.170000 audit: BPF prog-id=174 op=LOAD Jun 25 14:28:24.170000 audit: BPF prog-id=175 op=LOAD Jun 25 14:28:24.170000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4686 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646430343730373335616464323661396532643734613830643431 Jun 25 14:28:24.171000 audit: BPF prog-id=176 op=LOAD Jun 25 14:28:24.171000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4686 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646430343730373335616464323661396532643734613830643431 Jun 25 14:28:24.171000 audit: BPF prog-id=176 op=UNLOAD Jun 25 14:28:24.171000 audit: BPF prog-id=175 op=UNLOAD Jun 25 14:28:24.171000 audit: BPF prog-id=177 op=LOAD Jun 25 14:28:24.171000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4686 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646430343730373335616464323661396532643734613830643431 Jun 25 14:28:24.216919 containerd[1244]: time="2024-06-25T14:28:24.216866981Z" level=info msg="StartContainer for \"28dd0470735add26a9e2d74a80d41e019c1e9f095ae34ebee9384971b5319445\" returns successfully" Jun 25 14:28:24.306535 systemd-networkd[1082]: caliced31c40c50: Gained IPv6LL Jun 25 14:28:24.659993 kubelet[2256]: I0625 14:28:24.659953 2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f66fc9b4-x8kj8" podStartSLOduration=3.032924955 podCreationTimestamp="2024-06-25 14:28:20 +0000 UTC" firstStartedPulling="2024-06-25 14:28:22.460767934 +0000 UTC m=+61.130931671" lastFinishedPulling="2024-06-25 14:28:24.08775537 +0000 UTC m=+62.757919107" observedRunningTime="2024-06-25 14:28:24.658678163 +0000 UTC m=+63.328842020" watchObservedRunningTime="2024-06-25 14:28:24.659912391 +0000 UTC m=+63.330076128" Jun 25 14:28:24.671000 audit[4769]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=4769 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:24.671000 audit[4769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff0de1e60 a2=0 a3=1 items=0 ppid=2430 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:24.672000 audit[4769]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4769 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:24.672000 audit[4769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff0de1e60 a2=0 a3=1 items=0 ppid=2430 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:24.897000 audit[4771]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=4771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:24.897000 audit[4771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffcee5f8e0 a2=0 a3=1 items=0 ppid=2430 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:24.898000 audit[4771]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=4771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:24.898000 audit[4771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcee5f8e0 a2=0 a3=1 items=0 ppid=2430 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:24.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:25.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:43166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:25.362741 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:43166.service - OpenSSH per-connection server daemon (10.0.0.1:43166). Jun 25 14:28:25.363613 kernel: kauditd_printk_skb: 52 callbacks suppressed Jun 25 14:28:25.363714 kernel: audit: type=1130 audit(1719325705.361:691): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:43166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:28:25.396000 audit[4773]: USER_ACCT pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.398190 sshd[4773]: Accepted publickey for core from 10.0.0.1 port 43166 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:25.399539 sshd[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:25.397000 audit[4773]: CRED_ACQ pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.404724 kernel: audit: type=1101 audit(1719325705.396:692): pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.404829 kernel: audit: type=1103 audit(1719325705.397:693): pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.404849 kernel: audit: type=1006 audit(1719325705.397:694): pid=4773 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 14:28:25.404867 kernel: audit: type=1300 audit(1719325705.397:694): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdd46f20 a2=3 a3=1 items=0 ppid=1 pid=4773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:25.397000 audit[4773]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdd46f20 a2=3 a3=1 items=0 ppid=1 pid=4773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:25.407445 kernel: audit: type=1327 audit(1719325705.397:694): proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:25.397000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:25.406595 systemd-logind[1231]: New session 16 of user core. Jun 25 14:28:25.418659 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 14:28:25.428000 audit[4773]: USER_START pid=4773 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.434361 kernel: audit: type=1105 audit(1719325705.428:695): pid=4773 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.436000 audit[4775]: CRED_ACQ pid=4775 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.441695 kernel: audit: type=1103 audit(1719325705.436:696): pid=4775 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.659218 sshd[4773]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:25.660000 audit[4773]: USER_END pid=4773 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.662000 audit[4773]: CRED_DISP pid=4773 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.667717 kernel: audit: type=1106 audit(1719325705.660:697): pid=4773 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.667799 kernel: audit: type=1104 audit(1719325705.662:698): pid=4773 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.671812 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:43166.service: Deactivated successfully. Jun 25 14:28:25.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:43166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:25.672509 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:28:25.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:43180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:25.675158 systemd-logind[1231]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:28:25.675450 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:43180.service - OpenSSH per-connection server daemon (10.0.0.1:43180). 
Jun 25 14:28:25.677023 systemd-logind[1231]: Removed session 16. Jun 25 14:28:25.703000 audit[4786]: USER_ACCT pid=4786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.705568 sshd[4786]: Accepted publickey for core from 10.0.0.1 port 43180 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:25.705000 audit[4786]: CRED_ACQ pid=4786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.705000 audit[4786]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb1e6830 a2=3 a3=1 items=0 ppid=1 pid=4786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:25.705000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:25.707124 sshd[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:25.711618 systemd-logind[1231]: New session 17 of user core. Jun 25 14:28:25.723757 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 14:28:25.726000 audit[4786]: USER_START pid=4786 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.728000 audit[4788]: CRED_ACQ pid=4788 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:25.911000 audit[4796]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=4796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:25.911000 audit[4796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffcc2d2f30 a2=0 a3=1 items=0 ppid=2430 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:25.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:25.912000 audit[4796]: NETFILTER_CFG table=nat:125 family=2 entries=27 op=nft_register_chain pid=4796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:25.912000 audit[4796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffcc2d2f30 a2=0 a3=1 items=0 ppid=2430 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:25.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:26.025082 sshd[4786]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:26.024000 audit[4786]: USER_END pid=4786 uid=0 auid=500 
ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:26.024000 audit[4786]: CRED_DISP pid=4786 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:26.034980 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:43180.service: Deactivated successfully. Jun 25 14:28:26.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:43180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:26.035747 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:28:26.036304 systemd-logind[1231]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:28:26.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:43184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:26.037847 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:43184.service - OpenSSH per-connection server daemon (10.0.0.1:43184). Jun 25 14:28:26.038620 systemd-logind[1231]: Removed session 17. Jun 25 14:28:26.076000 audit[4799]: USER_ACCT pid=4799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:26.078397 sshd[4799]: Accepted publickey for core from 10.0.0.1 port 43184 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:26.078000 audit[4799]: CRED_ACQ pid=4799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:26.078000 audit[4799]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8c245f0 a2=3 a3=1 items=0 ppid=1 pid=4799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:26.078000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:26.080065 sshd[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:26.084206 systemd-logind[1231]: New session 18 of user core. Jun 25 14:28:26.091555 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 14:28:26.094000 audit[4799]: USER_START pid=4799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:26.096000 audit[4801]: CRED_ACQ pid=4801 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:27.429000 audit[4816]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4816 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:27.429000 audit[4816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=fffffd8aae90 a2=0 a3=1 items=0 ppid=2430 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:27.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:27.430000 audit[4816]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=4816 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:27.430000 audit[4816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffffd8aae90 a2=0 a3=1 items=0 ppid=2430 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:27.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:27.462910 sshd[4799]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:27.465000 audit[4799]: USER_END pid=4799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:27.465000 audit[4799]: CRED_DISP pid=4799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:27.472992 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:43186.service - OpenSSH per-connection server daemon (10.0.0.1:43186). Jun 25 14:28:27.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:43186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:27.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:43184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:27.473654 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:43184.service: Deactivated successfully. Jun 25 14:28:27.475234 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:28:27.476519 systemd-logind[1231]: Session 18 logged out. 
Waiting for processes to exit. Jun 25 14:28:27.478236 systemd-logind[1231]: Removed session 18. Jun 25 14:28:27.526964 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 43186 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:27.524000 audit[4818]: USER_ACCT pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:27.527000 audit[4818]: CRED_ACQ pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:27.527000 audit[4818]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9799c90 a2=3 a3=1 items=0 ppid=1 pid=4818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:27.527000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:27.529748 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:27.537227 systemd-logind[1231]: New session 19 of user core. Jun 25 14:28:27.543654 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 14:28:27.557000 audit[4818]: USER_START pid=4818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:27.560000 audit[4821]: CRED_ACQ pid=4821 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.115318 sshd[4818]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:28.115000 audit[4818]: USER_END pid=4818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.115000 audit[4818]: CRED_DISP pid=4818 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.122990 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:43186.service: Deactivated successfully. Jun 25 14:28:28.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:43186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:28.123875 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:28:28.124530 systemd-logind[1231]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:28:28.132173 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:43188.service - OpenSSH per-connection server daemon (10.0.0.1:43188). 
Jun 25 14:28:28.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:43188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:28.133670 systemd-logind[1231]: Removed session 19. Jun 25 14:28:28.165000 audit[4830]: USER_ACCT pid=4830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.167521 sshd[4830]: Accepted publickey for core from 10.0.0.1 port 43188 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:28.167000 audit[4830]: CRED_ACQ pid=4830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.167000 audit[4830]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0f18800 a2=3 a3=1 items=0 ppid=1 pid=4830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:28.167000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:28.169133 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:28.174493 systemd-logind[1231]: New session 20 of user core. Jun 25 14:28:28.181570 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 14:28:28.183000 audit[4830]: USER_START pid=4830 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.185000 audit[4832]: CRED_ACQ pid=4832 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.316810 sshd[4830]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:28.316000 audit[4830]: USER_END pid=4830 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.316000 audit[4830]: CRED_DISP pid=4830 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:28.319627 systemd-logind[1231]: Session 20 logged out. Waiting for processes to exit. Jun 25 14:28:28.319899 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:43188.service: Deactivated successfully. Jun 25 14:28:28.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:43188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:28.320845 systemd[1]: session-20.scope: Deactivated successfully. 
Jun 25 14:28:28.321525 systemd-logind[1231]: Removed session 20. Jun 25 14:28:28.446000 audit[4844]: NETFILTER_CFG table=filter:128 family=2 entries=32 op=nft_register_rule pid=4844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:28.446000 audit[4844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=fffffe81fbf0 a2=0 a3=1 items=0 ppid=2430 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:28.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:28.447000 audit[4844]: NETFILTER_CFG table=nat:129 family=2 entries=22 op=nft_register_rule pid=4844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:28.447000 audit[4844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffffe81fbf0 a2=0 a3=1 items=0 ppid=2430 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:28.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:30.652000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.655025 kernel: kauditd_printk_skb: 63 callbacks suppressed Jun 25 14:28:30.655078 kernel: audit: type=1400 audit(1719325710.652:742): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.652000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029260e0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.660124 kernel: audit: type=1300 audit(1719325710.652:742): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029260e0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.660209 kernel: audit: type=1327 audit(1719325710.652:742): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:30.652000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:30.653000 audit[2132]: AVC avc: denied { watch 
} for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.664567 kernel: audit: type=1400 audit(1719325710.653:743): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.664629 kernel: audit: type=1300 audit(1719325710.653:743): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002926100 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.653000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002926100 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.653000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:30.669795 kernel: audit: type=1327 audit(1719325710.653:743): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:30.669826 kernel: audit: type=1400 audit(1719325710.654:744): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.654000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.671858 kernel: audit: type=1300 audit(1719325710.654:744): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029262a0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.654000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029262a0 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.654000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:30.677497 kernel: audit: type=1327 audit(1719325710.654:744): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:30.677553 kernel: audit: type=1400 audit(1719325710.655:745): avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.655000 audit[2132]: AVC avc: denied { watch } for pid=2132 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6258 scontext=system_u:system_r:container_t:s0:c232,c824 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:28:30.655000 audit[2132]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002c60000 a2=fc6 a3=0 items=0 ppid=1972 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c232,c824 key=(null) Jun 25 14:28:30.655000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:28:31.919000 audit[4852]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:31.919000 audit[4852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffda7c6330 a2=0 a3=1 items=0 ppid=2430 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:31.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:31.922000 audit[4852]: NETFILTER_CFG table=nat:131 family=2 entries=106 op=nft_register_chain pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:28:31.922000 audit[4852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffda7c6330 a2=0 a3=1 items=0 ppid=2430 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:31.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:28:33.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:49004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:28:33.328403 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:49004.service - OpenSSH per-connection server daemon (10.0.0.1:49004). Jun 25 14:28:33.365000 audit[4857]: USER_ACCT pid=4857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:33.367498 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 49004 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:33.366000 audit[4857]: CRED_ACQ pid=4857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:33.367000 audit[4857]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff32136c0 a2=3 a3=1 items=0 ppid=1 pid=4857 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:33.367000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:33.368677 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:33.373032 systemd-logind[1231]: New session 21 of user core. Jun 25 14:28:33.382591 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 14:28:33.385000 audit[4857]: USER_START pid=4857 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:33.386000 audit[4859]: CRED_ACQ pid=4859 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:33.498558 sshd[4857]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:33.498000 audit[4857]: USER_END pid=4857 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:33.498000 audit[4857]: CRED_DISP pid=4857 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:33.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:49004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:33.501071 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:49004.service: Deactivated successfully. Jun 25 14:28:33.501984 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 14:28:33.502651 systemd-logind[1231]: Session 21 logged out. Waiting for processes to exit. Jun 25 14:28:33.503564 systemd-logind[1231]: Removed session 21. 
Jun 25 14:28:38.513078 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:49014.service - OpenSSH per-connection server daemon (10.0.0.1:49014). Jun 25 14:28:38.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:38.516139 kernel: kauditd_printk_skb: 19 callbacks suppressed Jun 25 14:28:38.516219 kernel: audit: type=1130 audit(1719325718.511:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:38.542000 audit[4878]: USER_ACCT pid=4878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.543933 sshd[4878]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:38.546393 sshd[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:38.544000 audit[4878]: CRED_ACQ pid=4878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.549039 kernel: audit: type=1101 audit(1719325718.542:758): pid=4878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.549112 kernel: audit: type=1103 audit(1719325718.544:759): pid=4878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.549131 kernel: audit: type=1006 audit(1719325718.544:760): pid=4878 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 14:28:38.544000 audit[4878]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc755ea70 a2=3 a3=1 items=0 ppid=1 pid=4878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:38.553070 kernel: audit: type=1300 audit(1719325718.544:760): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc755ea70 a2=3 a3=1 items=0 ppid=1 pid=4878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:38.553136 kernel: audit: type=1327 audit(1719325718.544:760): proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:38.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:38.557629 systemd-logind[1231]: New session 22 of user core. Jun 25 14:28:38.562579 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 14:28:38.565000 audit[4878]: USER_START pid=4878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.567000 audit[4880]: CRED_ACQ pid=4880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.572932 kernel: audit: type=1105 audit(1719325718.565:761): pid=4878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.572986 kernel: audit: type=1103 audit(1719325718.567:762): pid=4880 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.709703 sshd[4878]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:38.709000 audit[4878]: USER_END pid=4878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.712579 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:49014.service: Deactivated successfully. Jun 25 14:28:38.713910 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 14:28:38.709000 audit[4878]: CRED_DISP pid=4878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.717403 kernel: audit: type=1106 audit(1719325718.709:763): pid=4878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.717487 kernel: audit: type=1104 audit(1719325718.709:764): pid=4878 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:38.719195 systemd-logind[1231]: Session 22 logged out. Waiting for processes to exit. Jun 25 14:28:38.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:38.720320 systemd-logind[1231]: Removed session 22. Jun 25 14:28:43.727646 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:44032.service - OpenSSH per-connection server daemon (10.0.0.1:44032). 
Jun 25 14:28:43.731322 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:28:43.731436 kernel: audit: type=1130 audit(1719325723.726:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:44032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:43.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:44032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:43.767000 audit[4920]: USER_ACCT pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.769538 sshd[4920]: Accepted publickey for core from 10.0.0.1 port 44032 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:28:43.770401 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:28:43.768000 audit[4920]: CRED_ACQ pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.774050 kernel: audit: type=1101 audit(1719325723.767:767): pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.774116 kernel: audit: type=1103 audit(1719325723.768:768): pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.775914 kernel: audit: type=1006 audit(1719325723.768:769): pid=4920 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 14:28:43.768000 audit[4920]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb9b76f0 a2=3 a3=1 items=0 ppid=1 pid=4920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:43.778212 systemd-logind[1231]: New session 23 of user core. Jun 25 14:28:43.779086 kernel: audit: type=1300 audit(1719325723.768:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb9b76f0 a2=3 a3=1 items=0 ppid=1 pid=4920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:28:43.768000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:43.780107 kernel: audit: type=1327 audit(1719325723.768:769): proctitle=737368643A20636F7265205B707269765D Jun 25 14:28:43.785606 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 14:28:43.789000 audit[4920]: USER_START pid=4920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.790000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.795522 kernel: audit: type=1105 audit(1719325723.789:770): pid=4920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.795568 kernel: audit: type=1103 audit(1719325723.790:771): pid=4922 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.943278 sshd[4920]: pam_unix(sshd:session): session closed for user core Jun 25 14:28:43.942000 audit[4920]: USER_END pid=4920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.946152 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:44032.service: Deactivated successfully. Jun 25 14:28:43.943000 audit[4920]: CRED_DISP pid=4920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.947066 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 14:28:43.947780 systemd-logind[1231]: Session 23 logged out. Waiting for processes to exit. Jun 25 14:28:43.948655 systemd-logind[1231]: Removed session 23. Jun 25 14:28:43.950014 kernel: audit: type=1106 audit(1719325723.942:772): pid=4920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.950076 kernel: audit: type=1104 audit(1719325723.943:773): pid=4920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:28:43.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:44032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:28:45.281695 systemd[1]: run-containerd-runc-k8s.io-b2a65b197e8db50b6196c3864d063996769b551eb0349b2bf20ac5604f9b8d60-runc.QgHRD2.mount: Deactivated successfully.