Jun 25 16:22:12.856558 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:22:12.856578 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:22:12.856590 kernel: BIOS-provided physical RAM map: Jun 25 16:22:12.856598 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:22:12.856605 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:22:12.856612 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:22:12.856621 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Jun 25 16:22:12.856629 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Jun 25 16:22:12.856636 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 16:22:12.856645 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:22:12.856653 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 25 16:22:12.856660 kernel: NX (Execute Disable) protection: active Jun 25 16:22:12.856668 kernel: SMBIOS 2.8 present. Jun 25 16:22:12.856675 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jun 25 16:22:12.856685 kernel: Hypervisor detected: KVM Jun 25 16:22:12.856695 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:22:12.856702 kernel: kvm-clock: using sched offset of 2562847024 cycles Jun 25 16:22:12.856711 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:22:12.856720 kernel: tsc: Detected 2794.750 MHz processor Jun 25 16:22:12.856728 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:22:12.856737 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:22:12.856745 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Jun 25 16:22:12.856753 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:22:12.856762 kernel: Using GB pages for direct mapping Jun 25 16:22:12.856781 kernel: ACPI: Early table checksum verification disabled Jun 25 16:22:12.856789 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Jun 25 16:22:12.856797 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:22:12.856805 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:22:12.856817 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:22:12.856824 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jun 25 16:22:12.856833 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:22:12.856841 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:22:12.856851 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:22:12.856859 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Jun 25 16:22:12.856867 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] 
Jun 25 16:22:12.856875 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jun 25 16:22:12.856883 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Jun 25 16:22:12.856892 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Jun 25 16:22:12.856900 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Jun 25 16:22:12.856908 kernel: No NUMA configuration found Jun 25 16:22:12.856921 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Jun 25 16:22:12.856930 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Jun 25 16:22:12.856939 kernel: Zone ranges: Jun 25 16:22:12.856948 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:22:12.856957 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Jun 25 16:22:12.856966 kernel: Normal empty Jun 25 16:22:12.856974 kernel: Movable zone start for each node Jun 25 16:22:12.856984 kernel: Early memory node ranges Jun 25 16:22:12.856992 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:22:12.857001 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Jun 25 16:22:12.857010 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Jun 25 16:22:12.857019 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:22:12.857028 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:22:12.857037 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Jun 25 16:22:12.857046 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 16:22:12.857054 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:22:12.857065 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:22:12.857086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 16:22:12.857095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:22:12.857103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:22:12.857112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:22:12.857121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:22:12.857130 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:22:12.857139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:22:12.857147 kernel: TSC deadline timer available Jun 25 16:22:12.857158 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jun 25 16:22:12.857176 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 25 16:22:12.857184 kernel: kvm-guest: setup PV sched yield Jun 25 16:22:12.857194 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Jun 25 16:22:12.857203 kernel: Booting paravirtualized kernel on KVM Jun 25 16:22:12.857212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:22:12.857221 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 25 16:22:12.857230 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288 Jun 25 16:22:12.857239 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152 Jun 25 16:22:12.857250 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 25 16:22:12.857258 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:22:12.857267 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:22:12.857276 kernel: Fallback order for Node 0: 0 Jun 25 16:22:12.857285 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632733 Jun 25 16:22:12.857294 kernel: Policy zone: DMA32 Jun 25 16:22:12.857304 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:22:12.857313 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:22:12.857322 kernel: random: crng init done Jun 25 16:22:12.857333 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 16:22:12.857342 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:22:12.857351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:22:12.857360 kernel: Memory: 2430544K/2571756K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 140952K reserved, 0K cma-reserved) Jun 25 16:22:12.857369 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 16:22:12.857378 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:22:12.857387 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:22:12.857396 kernel: Dynamic Preempt: voluntary Jun 25 16:22:12.857407 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:22:12.857416 kernel: rcu: RCU event tracing is enabled. Jun 25 16:22:12.857425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 16:22:12.857435 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:22:12.857444 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:22:12.857453 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:22:12.857462 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:22:12.857472 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 16:22:12.857480 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 25 16:22:12.857489 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:22:12.857500 kernel: Console: colour VGA+ 80x25 Jun 25 16:22:12.857509 kernel: printk: console [ttyS0] enabled Jun 25 16:22:12.857518 kernel: ACPI: Core revision 20220331 Jun 25 16:22:12.857527 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 16:22:12.857536 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:22:12.857545 kernel: x2apic enabled Jun 25 16:22:12.857554 kernel: Switched APIC routing to physical x2apic. Jun 25 16:22:12.857563 kernel: kvm-guest: setup PV IPIs Jun 25 16:22:12.857572 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:22:12.857583 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 16:22:12.857592 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jun 25 16:22:12.857601 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 16:22:12.857610 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 25 16:22:12.857619 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 25 16:22:12.857627 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:22:12.857636 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:22:12.857646 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:22:12.857662 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:22:12.857671 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 25 16:22:12.857680 kernel: RETBleed: Mitigation: untrained return thunk Jun 25 16:22:12.857691 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:22:12.857701 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:22:12.857710 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:22:12.857719 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:22:12.857728 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:22:12.857738 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:22:12.857749 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 16:22:12.857758 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:22:12.857768 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:22:12.857777 kernel: LSM: Security Framework initializing Jun 25 16:22:12.857786 kernel: SELinux: Initializing. Jun 25 16:22:12.857796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 16:22:12.857805 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 16:22:12.857815 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 25 16:22:12.857826 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:22:12.857835 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:22:12.857844 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:22:12.857853 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:22:12.857863 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:22:12.857872 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:22:12.857880 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 25 16:22:12.857890 kernel: ... version: 0 Jun 25 16:22:12.857899 kernel: ... bit width: 48 Jun 25 16:22:12.857908 kernel: ... generic registers: 6 Jun 25 16:22:12.857919 kernel: ... value mask: 0000ffffffffffff Jun 25 16:22:12.857928 kernel: ... max period: 00007fffffffffff Jun 25 16:22:12.857938 kernel: ... fixed-purpose events: 0 Jun 25 16:22:12.857947 kernel: ... event mask: 000000000000003f Jun 25 16:22:12.857956 kernel: signal: max sigframe size: 1776 Jun 25 16:22:12.857965 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:22:12.857975 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:22:12.857984 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:22:12.857993 kernel: x86: Booting SMP configuration: Jun 25 16:22:12.858004 kernel: .... 
node #0, CPUs: #1 #2 #3 Jun 25 16:22:12.858013 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 16:22:12.858022 kernel: smpboot: Max logical packages: 1 Jun 25 16:22:12.858032 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jun 25 16:22:12.858041 kernel: devtmpfs: initialized Jun 25 16:22:12.858050 kernel: x86/mm: Memory block size: 128MB Jun 25 16:22:12.858059 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:22:12.858119 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 16:22:12.858129 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:22:12.858140 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:22:12.858150 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:22:12.858159 kernel: audit: type=2000 audit(1719332531.912:1): state=initialized audit_enabled=0 res=1 Jun 25 16:22:12.858176 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:22:12.858186 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:22:12.858195 kernel: cpuidle: using governor menu Jun 25 16:22:12.858204 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:22:12.858213 kernel: dca service started, version 1.12.1 Jun 25 16:22:12.858223 kernel: PCI: Using configuration type 1 for base access Jun 25 16:22:12.858233 kernel: PCI: Using configuration type 1 for extended access Jun 25 16:22:12.858243 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:22:12.858252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:22:12.858262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:22:12.858271 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:22:12.858280 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:22:12.858289 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:22:12.858299 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:22:12.858308 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:22:12.858319 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:22:12.858328 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:22:12.858337 kernel: ACPI: Interpreter enabled Jun 25 16:22:12.858346 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 16:22:12.858355 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:22:12.858365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:22:12.858375 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:22:12.858384 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 16:22:12.858393 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:22:12.858532 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:22:12.858548 kernel: acpiphp: Slot [3] registered Jun 25 16:22:12.858557 kernel: acpiphp: Slot [4] registered Jun 25 16:22:12.858567 kernel: acpiphp: Slot [5] registered Jun 25 16:22:12.858576 kernel: acpiphp: Slot [6] registered Jun 25 16:22:12.858585 kernel: acpiphp: Slot [7] registered Jun 25 16:22:12.858594 kernel: acpiphp: Slot [8] registered Jun 25 16:22:12.858603 kernel: acpiphp: Slot [9] registered Jun 25 16:22:12.858612 kernel: acpiphp: Slot [10] registered Jun 25 16:22:12.858623 kernel: acpiphp: Slot [11] registered Jun 25 16:22:12.858633 
kernel: acpiphp: Slot [12] registered Jun 25 16:22:12.858642 kernel: acpiphp: Slot [13] registered Jun 25 16:22:12.858651 kernel: acpiphp: Slot [14] registered Jun 25 16:22:12.858660 kernel: acpiphp: Slot [15] registered Jun 25 16:22:12.858668 kernel: acpiphp: Slot [16] registered Jun 25 16:22:12.858677 kernel: acpiphp: Slot [17] registered Jun 25 16:22:12.858685 kernel: acpiphp: Slot [18] registered Jun 25 16:22:12.858694 kernel: acpiphp: Slot [19] registered Jun 25 16:22:12.858704 kernel: acpiphp: Slot [20] registered Jun 25 16:22:12.858712 kernel: acpiphp: Slot [21] registered Jun 25 16:22:12.858722 kernel: acpiphp: Slot [22] registered Jun 25 16:22:12.858731 kernel: acpiphp: Slot [23] registered Jun 25 16:22:12.858739 kernel: acpiphp: Slot [24] registered Jun 25 16:22:12.858748 kernel: acpiphp: Slot [25] registered Jun 25 16:22:12.858759 kernel: acpiphp: Slot [26] registered Jun 25 16:22:12.858768 kernel: acpiphp: Slot [27] registered Jun 25 16:22:12.858776 kernel: acpiphp: Slot [28] registered Jun 25 16:22:12.858785 kernel: acpiphp: Slot [29] registered Jun 25 16:22:12.858796 kernel: acpiphp: Slot [30] registered Jun 25 16:22:12.858805 kernel: acpiphp: Slot [31] registered Jun 25 16:22:12.858814 kernel: PCI host bridge to bus 0000:00 Jun 25 16:22:12.858912 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:22:12.858990 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:22:12.859065 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:22:12.859169 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jun 25 16:22:12.859251 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 16:22:12.859326 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:22:12.859427 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:22:12.859520 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:22:12.859614 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 16:22:12.859700 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jun 25 16:22:12.859788 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:22:12.859873 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:22:12.859957 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:22:12.860042 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:22:12.860156 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:22:12.860256 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 16:22:12.860341 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 16:22:12.860437 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jun 25 16:22:12.860523 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jun 25 16:22:12.860608 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jun 25 16:22:12.860694 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jun 25 16:22:12.860780 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:22:12.860872 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 16:22:12.860960 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 16:22:12.861053 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jun 25 16:22:12.861171 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jun 25 16:22:12.861270 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:22:12.861359 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 16:22:12.861447 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jun 25 16:22:12.861534 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jun 25 16:22:12.861629 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:22:12.861723 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Jun 25 16:22:12.861811 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jun 25 16:22:12.861897 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jun 25 16:22:12.861987 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jun 25 16:22:12.861999 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:22:12.862007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:22:12.862016 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:22:12.862024 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:22:12.862035 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:22:12.862043 kernel: iommu: Default domain type: Translated Jun 25 16:22:12.862052 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:22:12.862060 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:22:12.862082 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:22:12.862091 kernel: PTP clock support registered Jun 25 16:22:12.862100 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:22:12.862108 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:22:12.862117 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:22:12.862127 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Jun 25 16:22:12.862229 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 16:22:12.862311 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 16:22:12.862396 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:22:12.862408 kernel: vgaarb: loaded Jun 25 16:22:12.862417 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 16:22:12.862426 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 16:22:12.862435 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:22:12.862446 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:22:12.862455 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:22:12.862464 kernel: pnp: PnP ACPI init Jun 25 16:22:12.862551 kernel: pnp 00:02: [dma 2] Jun 25 16:22:12.862563 kernel: pnp: PnP ACPI: found 6 devices Jun 25 16:22:12.862572 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:22:12.862581 kernel: NET: Registered PF_INET protocol family Jun 25 16:22:12.862590 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 16:22:12.862601 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 16:22:12.862610 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:22:12.862619 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:22:12.862628 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 
16:22:12.862637 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 16:22:12.862645 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 16:22:12.862654 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 16:22:12.862663 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:22:12.862672 kernel: NET: Registered PF_XDP protocol family Jun 25 16:22:12.862748 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:22:12.862822 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:22:12.862897 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:22:12.862971 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jun 25 16:22:12.863045 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 16:22:12.863148 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 16:22:12.863247 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:22:12.863259 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:22:12.863270 kernel: Initialise system trusted keyrings Jun 25 16:22:12.863279 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 16:22:12.863288 kernel: Key type asymmetric registered Jun 25 16:22:12.863296 kernel: Asymmetric key parser 'x509' registered Jun 25 16:22:12.863305 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:22:12.863313 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:22:12.863322 kernel: io scheduler mq-deadline registered Jun 25 16:22:12.863331 kernel: io scheduler kyber registered Jun 25 16:22:12.863339 kernel: io scheduler bfq registered Jun 25 16:22:12.863350 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:22:12.863359 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:22:12.863367 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 16:22:12.863376 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:22:12.863395 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:22:12.863403 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:22:12.863420 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:22:12.863442 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:22:12.863458 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:22:12.863594 kernel: rtc_cmos 00:05: RTC can wake from S4 Jun 25 16:22:12.863609 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:22:12.863687 kernel: rtc_cmos 00:05: registered as rtc0 Jun 25 16:22:12.863766 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T16:22:12 UTC (1719332532) Jun 25 16:22:12.863844 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 25 16:22:12.863855 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:22:12.863864 kernel: Segment Routing with IPv6 Jun 25 16:22:12.863874 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:22:12.863885 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:22:12.863894 kernel: Key type dns_resolver registered Jun 25 16:22:12.863903 kernel: IPI shorthand broadcast: enabled Jun 25 16:22:12.863912 kernel: sched_clock: Marking stable (636136760, 110007578)->(760624619, -14480281) Jun 25 16:22:12.863922 kernel: registered taskstats version 1 Jun 25 16:22:12.863931 kernel: Loading compiled-in X.509 
certificates Jun 25 16:22:12.863940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:22:12.863949 kernel: Key type .fscrypt registered Jun 25 16:22:12.863958 kernel: Key type fscrypt-provisioning registered Jun 25 16:22:12.863968 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 16:22:12.863977 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:22:12.863987 kernel: ima: No architecture policies found Jun 25 16:22:12.863996 kernel: clk: Disabling unused clocks Jun 25 16:22:12.864005 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:22:12.864014 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:22:12.864023 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:22:12.864032 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:22:12.864041 kernel: Run /init as init process Jun 25 16:22:12.864052 kernel: with arguments: Jun 25 16:22:12.864061 kernel: /init Jun 25 16:22:12.864081 kernel: with environment: Jun 25 16:22:12.864090 kernel: HOME=/ Jun 25 16:22:12.864099 kernel: TERM=linux Jun 25 16:22:12.864110 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:22:12.864134 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:22:12.864147 systemd[1]: Detected virtualization kvm. Jun 25 16:22:12.864158 systemd[1]: Detected architecture x86-64. Jun 25 16:22:12.864176 systemd[1]: Running in initrd. Jun 25 16:22:12.864186 systemd[1]: No hostname configured, using default hostname. Jun 25 16:22:12.864195 systemd[1]: Hostname set to <localhost>. Jun 25 16:22:12.864206 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:22:12.864216 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:22:12.864226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:22:12.864238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:22:12.864248 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:22:12.864258 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:22:12.864270 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:22:12.864279 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:22:12.864290 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:22:12.864300 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:22:12.864312 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:22:12.864322 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:22:12.864333 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:22:12.864343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:22:12.864353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:22:12.864363 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:22:12.864374 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 16:22:12.864384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:22:12.864394 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:22:12.864406 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:22:12.864416 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:22:12.864426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:22:12.864437 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:22:12.864449 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:22:12.864461 kernel: audit: type=1130 audit(1719332532.855:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.864471 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:22:12.864484 systemd-journald[195]: Journal started Jun 25 16:22:12.864527 systemd-journald[195]: Runtime Journal (/run/log/journal/ee73eefb5bf647cd9941fd5c1c8a6683) is 6.0M, max 48.4M, 42.3M free. Jun 25 16:22:12.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.864912 systemd-modules-load[196]: Inserted module 'overlay' Jun 25 16:22:12.900339 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:22:12.900357 kernel: Bridge firewalling registered Jun 25 16:22:12.888744 systemd-modules-load[196]: Inserted module 'br_netfilter' Jun 25 16:22:12.902194 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:22:12.902209 kernel: audit: type=1130 audit(1719332532.900:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.905882 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:22:12.913193 kernel: audit: type=1130 audit(1719332532.905:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.913210 kernel: audit: type=1130 audit(1719332532.910:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.914092 kernel: SCSI subsystem initialized Jun 25 16:22:12.923204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 16:22:12.923835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:22:12.924942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:22:12.931583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:22:12.934579 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:22:12.934596 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:22:12.934607 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:22:12.933342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:22:12.937367 kernel: audit: type=1130 audit(1719332532.933:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.935824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:22:12.940115 kernel: audit: type=1130 audit(1719332532.934:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.941108 kernel: audit: type=1334 audit(1719332532.934:8): prog-id=6 op=LOAD Jun 25 16:22:12.934000 audit: BPF prog-id=6 op=LOAD Jun 25 16:22:12.950028 systemd-modules-load[196]: Inserted module 'dm_multipath' Jun 25 16:22:12.950706 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:22:12.951527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:22:12.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.955086 kernel: audit: type=1130 audit(1719332532.950:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.962492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:22:12.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.966089 kernel: audit: type=1130 audit(1719332532.962:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.967207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 16:22:12.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.969843 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:22:12.978486 systemd-resolved[204]: Positive Trust Anchors: Jun 25 16:22:12.978502 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:22:12.978543 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:22:12.981386 systemd-resolved[204]: Defaulting to hostname 'linux'. Jun 25 16:22:12.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:12.982193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:22:12.986940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:22:12.992919 dracut-cmdline[220]: dracut-dracut-053 Jun 25 16:22:12.994793 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:22:13.055094 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:22:13.070106 kernel: iscsi: registered transport (tcp) Jun 25 16:22:13.097131 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:22:13.097180 kernel: QLogic iSCSI HBA Driver Jun 25 16:22:13.131598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:22:13.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.138305 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:22:13.202096 kernel: raid6: avx2x4 gen() 24701 MB/s Jun 25 16:22:13.219095 kernel: raid6: avx2x2 gen() 23176 MB/s Jun 25 16:22:13.243296 kernel: raid6: avx2x1 gen() 15367 MB/s Jun 25 16:22:13.243331 kernel: raid6: using algorithm avx2x4 gen() 24701 MB/s Jun 25 16:22:13.261465 kernel: raid6: .... xor() 5710 MB/s, rmw enabled Jun 25 16:22:13.261529 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:22:13.266087 kernel: xor: automatically using best checksumming function avx Jun 25 16:22:13.433108 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:22:13.442327 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:22:13.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:13.442000 audit: BPF prog-id=7 op=LOAD Jun 25 16:22:13.442000 audit: BPF prog-id=8 op=LOAD Jun 25 16:22:13.457196 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:22:13.469449 systemd-udevd[396]: Using default interface naming scheme 'v252'. Jun 25 16:22:13.473292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:22:13.476670 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:22:13.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.486334 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jun 25 16:22:13.513018 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:22:13.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.555246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:22:13.587168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:22:13.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:13.621100 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 16:22:13.652790 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 16:22:13.652914 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:22:13.652934 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:22:13.652946 kernel: AES CTR mode by8 optimization enabled Jun 25 16:22:13.652957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:22:13.652969 kernel: GPT:9289727 != 19775487 Jun 25 16:22:13.652979 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:22:13.652995 kernel: GPT:9289727 != 19775487 Jun 25 16:22:13.653011 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:22:13.653021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:22:13.655103 kernel: libata version 3.00 loaded. Jun 25 16:22:13.659088 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:22:13.678560 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (448) Jun 25 16:22:13.678579 kernel: scsi host0: ata_piix Jun 25 16:22:13.678706 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (503) Jun 25 16:22:13.678719 kernel: scsi host1: ata_piix Jun 25 16:22:13.678831 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 16:22:13.678845 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 16:22:13.679209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:22:13.711764 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 16:22:13.711841 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jun 25 16:22:13.722471 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:22:13.734485 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:22:13.750665 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:22:13.841511 kernel: ata2: found unknown device (class 0) Jun 25 16:22:13.841589 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 16:22:13.844120 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 16:22:13.894467 disk-uuid[521]: Primary Header is updated. Jun 25 16:22:13.894467 disk-uuid[521]: Secondary Entries is updated. Jun 25 16:22:13.894467 disk-uuid[521]: Secondary Header is updated. Jun 25 16:22:13.898102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:22:13.902096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:22:13.909107 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 16:22:13.935440 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:22:13.935460 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:22:14.914855 disk-uuid[530]: The operation has completed successfully. Jun 25 16:22:14.916183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:22:14.931624 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:22:14.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:14.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:14.931713 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:22:14.978355 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:22:15.044181 sh[545]: Success Jun 25 16:22:15.053113 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 16:22:15.078307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:22:15.086863 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:22:15.090488 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:22:15.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.096487 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:22:15.096515 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:15.096523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:22:15.097502 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:22:15.098247 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:22:15.102794 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:22:15.102972 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:22:15.117217 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 25 16:22:15.118723 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:22:15.178814 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:15.178880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:15.178889 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:22:15.182849 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:22:15.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.183000 audit: BPF prog-id=9 op=LOAD Jun 25 16:22:15.189081 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:15.194276 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:22:15.195649 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:22:15.213569 systemd-networkd[723]: lo: Link UP Jun 25 16:22:15.213579 systemd-networkd[723]: lo: Gained carrier Jun 25 16:22:15.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.213942 systemd-networkd[723]: Enumeration completed Jun 25 16:22:15.214015 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:22:15.214153 systemd-networkd[723]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:15.214156 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:22:15.220275 systemd-networkd[723]: eth0: Link UP Jun 25 16:22:15.220278 systemd-networkd[723]: eth0: Gained carrier Jun 25 16:22:15.220283 systemd-networkd[723]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:15.221491 systemd[1]: Reached target network.target - Network. Jun 25 16:22:15.232200 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:22:15.235935 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:22:15.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.238263 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:22:15.239195 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:22:15.241663 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:22:15.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.244613 iscsid[729]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:22:15.244613 iscsid[729]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:22:15.244613 iscsid[729]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:22:15.244613 iscsid[729]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:22:15.244613 iscsid[729]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:22:15.244613 iscsid[729]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:22:15.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.243484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:22:15.253213 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:22:15.283732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:22:15.285969 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:22:15.288269 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:22:15.354203 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:22:15.361888 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:22:15.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.582007 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:22:15.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.659241 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 16:22:15.699968 ignition[744]: Ignition 2.15.0 Jun 25 16:22:15.699978 ignition[744]: Stage: fetch-offline Jun 25 16:22:15.700008 ignition[744]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:15.700016 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:22:15.700124 ignition[744]: parsed url from cmdline: "" Jun 25 16:22:15.700127 ignition[744]: no config URL provided Jun 25 16:22:15.700131 ignition[744]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:22:15.700137 ignition[744]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:22:15.700159 ignition[744]: op(1): [started] loading QEMU firmware config module Jun 25 16:22:15.700163 ignition[744]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 16:22:15.707825 ignition[744]: op(1): [finished] loading QEMU firmware config module Jun 25 16:22:15.748845 ignition[744]: parsing config with SHA512: 071d2b5cc99854fa63b9c8b33729feee649148216be67ef6602ea0b827e2632bfa3b09447a86e16213a191f736f09c87a37200ad3a01773e39ad252c9049b703 Jun 25 16:22:15.753047 unknown[744]: fetched base config from "system" Jun 25 16:22:15.753060 unknown[744]: fetched user config from "qemu" Jun 25 16:22:15.753469 ignition[744]: fetch-offline: fetch-offline passed Jun 25 16:22:15.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.829113 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:22:15.753517 ignition[744]: Ignition finished successfully Jun 25 16:22:15.830754 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:22:15.838232 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:22:15.850384 ignition[756]: Ignition 2.15.0 Jun 25 16:22:15.850394 ignition[756]: Stage: kargs Jun 25 16:22:15.850498 ignition[756]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:15.850508 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:22:15.853570 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:22:15.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.851426 ignition[756]: kargs: kargs passed Jun 25 16:22:15.851463 ignition[756]: Ignition finished successfully Jun 25 16:22:15.871247 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:22:15.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:15.880851 ignition[764]: Ignition 2.15.0 Jun 25 16:22:15.912473 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:22:15.880858 ignition[764]: Stage: disks Jun 25 16:22:15.914810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:22:15.880959 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:15.916248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jun 25 16:22:15.880971 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:22:15.918323 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:22:15.882379 ignition[764]: disks: disks passed Jun 25 16:22:15.919616 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:22:15.882422 ignition[764]: Ignition finished successfully Jun 25 16:22:15.919657 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:22:15.920692 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:22:15.958384 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:22:16.155624 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:22:16.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:16.165239 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:22:16.242104 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:22:16.242425 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:22:16.244102 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:22:16.254208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:22:16.255209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:22:16.257272 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:22:16.266284 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (780) Jun 25 16:22:16.266310 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:16.266320 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:16.266332 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:22:16.257310 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:22:16.257335 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:22:16.260463 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:22:16.267273 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:22:16.271230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:22:16.296828 initrd-setup-root[804]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:22:16.300414 initrd-setup-root[811]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:22:16.303391 initrd-setup-root[818]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:22:16.306680 initrd-setup-root[825]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:22:16.361497 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:22:16.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:16.373180 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:22:16.376087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
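systemd-fsck-root above finds the filesystem labeled ROOT clean before /sysroot is mounted; a roughly equivalent manual check (a sketch only, since systemd-fsck adds its own batching and progress options) would be:

    fsck -a /dev/disk/by-label/ROOT

The cut: /sysroot/etc/passwd ... No such file or directory lines above appear to come from initrd-setup-root probing for existing account files in the still-empty /sysroot/etc before seeding them, so on a first boot they are expected rather than errors.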
Jun 25 16:22:16.378993 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:22:16.381572 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:16.394475 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:22:16.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:16.401275 ignition[892]: INFO : Ignition 2.15.0 Jun 25 16:22:16.401275 ignition[892]: INFO : Stage: mount Jun 25 16:22:16.403181 ignition[892]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:16.403181 ignition[892]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:22:16.403181 ignition[892]: INFO : mount: mount passed Jun 25 16:22:16.403181 ignition[892]: INFO : Ignition finished successfully Jun 25 16:22:16.408111 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:22:16.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:16.418252 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:22:17.253271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:22:17.259083 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (903) Jun 25 16:22:17.264132 systemd-networkd[723]: eth0: Gained IPv6LL Jun 25 16:22:17.282410 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:22:17.282436 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:22:17.282445 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:22:17.285251 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
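The OEM partition is mounted again at this point so that the files stage which follows can write to it. Going only by the BTRFS lines above, the equivalent manual step would be roughly:

    mount -t btrfs /dev/vda6 /sysroot/oem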
Jun 25 16:22:17.305491 ignition[921]: INFO : Ignition 2.15.0 Jun 25 16:22:17.306427 ignition[921]: INFO : Stage: files Jun 25 16:22:17.306427 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:17.306427 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:22:17.309620 ignition[921]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:22:17.309620 ignition[921]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:22:17.309620 ignition[921]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:22:17.313887 ignition[921]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:22:17.313887 ignition[921]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:22:17.313887 ignition[921]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:22:17.313887 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:22:17.313887 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:22:17.311710 unknown[921]: wrote ssh authorized keys file for user: core Jun 25 16:22:17.341990 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:22:17.416789 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:22:17.416789 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:22:17.421013 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:22:17.422913 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:22:17.425239 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:22:17.427155 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:22:17.429162 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:22:17.431095 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:22:17.433264 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:22:17.435179 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:22:17.437100 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:22:17.438888 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:22:17.441360 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:22:17.443916 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:22:17.464782 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 16:22:17.818388 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:22:18.215259 ignition[921]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:22:18.215259 ignition[921]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:22:18.219648 ignition[921]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:22:18.237915 ignition[921]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:22:18.239489 ignition[921]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:22:18.239489 ignition[921]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:22:18.239489 ignition[921]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:22:18.239489 ignition[921]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:22:18.239489 ignition[921]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:22:18.239489 ignition[921]: INFO : files: files passed Jun 25 16:22:18.239489 ignition[921]: INFO : Ignition finished successfully Jun 25 16:22:18.253789 kernel: kauditd_printk_skb: 27 callbacks suppressed Jun 25 16:22:18.253811 kernel: audit: type=1130 audit(1719332538.242:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:18.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.239854 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:22:18.252285 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:22:18.254562 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:22:18.265709 kernel: audit: type=1130 audit(1719332538.258:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.265730 kernel: audit: type=1131 audit(1719332538.258:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.265748 kernel: audit: type=1130 audit(1719332538.265:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.256005 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:22:18.270093 initrd-setup-root-after-ignition[946]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 16:22:18.256130 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:22:18.272703 initrd-setup-root-after-ignition[948]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:22:18.272703 initrd-setup-root-after-ignition[948]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:22:18.264555 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:22:18.278994 initrd-setup-root-after-ignition[952]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:22:18.265787 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:22:18.277241 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:22:18.288870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:22:18.288947 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:22:18.298281 kernel: audit: type=1130 audit(1719332538.291:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:18.298301 kernel: audit: type=1131 audit(1719332538.291:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.291218 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:22:18.298283 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:22:18.299387 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:22:18.300225 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:22:18.311918 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:22:18.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.318095 kernel: audit: type=1130 audit(1719332538.314:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.320269 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:22:18.329487 systemd[1]: Stopped target network.target - Network. Jun 25 16:22:18.330495 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:22:18.332357 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:22:18.334575 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:22:18.336960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:22:18.343965 kernel: audit: type=1131 audit(1719332538.339:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.337100 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:22:18.339389 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:22:18.344250 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:22:18.346444 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:22:18.349138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:22:18.351394 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:22:18.353714 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:22:18.356113 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:22:18.358552 systemd[1]: Stopped target sysinit.target - System Initialization. 
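The files stage logged above, which finishes just before this teardown begins, is driven by the user config fetched from QEMU earlier; only its SHA512 appears in the log, so the real config cannot be reconstructed here. Purely as an illustration of the kind of Butane input that would produce those writes, with the paths, URLs and unit names taken from the log entries and everything else assumed (the home-directory files and the core user's SSH key shown above are omitted from this sketch):

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw
        - path: /etc/flatcar/update.conf
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false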
Jun 25 16:22:18.360568 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:22:18.362752 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:22:18.364825 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:22:18.374148 kernel: audit: type=1131 audit(1719332538.369:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.366556 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:22:18.366709 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:22:18.381884 kernel: audit: type=1131 audit(1719332538.376:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.369452 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:22:18.374310 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:22:18.374457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:22:18.376855 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:22:18.376978 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:22:18.381990 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:22:18.383197 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:22:18.387200 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:22:18.388990 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:22:18.391816 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:22:18.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.393766 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:22:18.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.393873 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:22:18.396143 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:22:18.396237 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:22:18.398859 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:22:18.398992 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jun 25 16:22:18.401046 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:22:18.401177 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:22:18.410321 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:22:18.412319 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:22:18.413749 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:22:18.416236 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:22:18.418152 systemd-networkd[723]: eth0: DHCPv6 lease lost Jun 25 16:22:18.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.418614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:22:18.418742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:22:18.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.422915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:22:18.423898 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:22:18.430749 ignition[967]: INFO : Ignition 2.15.0 Jun 25 16:22:18.431974 ignition[967]: INFO : Stage: umount Jun 25 16:22:18.431974 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:22:18.431974 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:22:18.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.437574 ignition[967]: INFO : umount: umount passed Jun 25 16:22:18.437574 ignition[967]: INFO : Ignition finished successfully Jun 25 16:22:18.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.439000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:22:18.439000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:22:18.432608 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:22:18.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.433537 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:22:18.433638 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:22:18.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:18.435738 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:22:18.435830 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:22:18.438050 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:22:18.438175 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:22:18.439779 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:22:18.439866 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:22:18.441337 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:22:18.441377 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:22:18.442479 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:22:18.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.442518 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:22:18.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.444428 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:22:18.444497 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:22:18.454237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:22:18.455782 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:22:18.455841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:22:18.458054 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:22:18.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.458118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:22:18.460537 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:22:18.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.460579 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:22:18.461820 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jun 25 16:22:18.461862 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:22:18.465889 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:22:18.470704 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:22:18.470793 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:22:18.471422 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:22:18.471529 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:22:18.473562 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:22:18.473629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:22:18.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.475697 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:22:18.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.475781 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:22:18.488095 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:22:18.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.488132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:22:18.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.489824 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:22:18.489851 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:22:18.491882 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:22:18.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:18.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:18.491917 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:22:18.494248 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:22:18.494279 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:22:18.495384 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:22:18.495413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:22:18.497404 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:22:18.497432 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:22:18.500169 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:22:18.501649 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:22:18.501688 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:22:18.504343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:22:18.504376 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:22:18.505757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:22:18.505791 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:22:18.507723 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:22:18.508171 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:22:18.508247 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:22:18.510284 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:22:18.510344 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:22:18.512209 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:22:18.539224 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Jun 25 16:22:18.539254 iscsid[729]: iscsid shutting down. Jun 25 16:22:18.514838 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:22:18.521621 systemd[1]: Switching root. Jun 25 16:22:18.541573 systemd-journald[195]: Journal stopped Jun 25 16:22:19.636880 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:22:19.636926 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:22:19.636937 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:22:19.636947 kernel: SELinux: policy capability open_perms=1 Jun 25 16:22:19.636956 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:22:19.636965 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:22:19.636973 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:22:19.636981 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:22:19.637004 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:22:19.637017 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:22:19.637029 systemd[1]: Successfully loaded SELinux policy in 38.686ms. Jun 25 16:22:19.637045 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.592ms. 
Jun 25 16:22:19.637056 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:22:19.637067 systemd[1]: Detected virtualization kvm. Jun 25 16:22:19.637097 systemd[1]: Detected architecture x86-64. Jun 25 16:22:19.637107 systemd[1]: Detected first boot. Jun 25 16:22:19.637115 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:22:19.637128 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:22:19.637138 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:22:19.637147 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:22:19.637156 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:22:19.637165 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:22:19.637512 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:22:19.637524 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:22:19.637537 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:22:19.637547 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:22:19.637559 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:22:19.637570 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:22:19.637581 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:22:19.637592 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:22:19.637604 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:22:19.637615 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:22:19.637626 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:22:19.637639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:22:19.637650 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:22:19.637661 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:22:19.637673 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:22:19.637684 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:22:19.637695 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:22:19.637706 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:22:19.637717 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:22:19.637730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:22:19.637741 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:22:19.637752 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:22:19.637763 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:22:19.637778 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jun 25 16:22:19.637792 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:22:19.637803 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:22:19.637815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:22:19.637828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:22:19.637839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:22:19.637850 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:22:19.637862 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:22:19.637873 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:22:19.637884 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:22:19.637895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:19.637906 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:22:19.637917 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:22:19.637929 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:22:19.637940 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:22:19.637951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:19.637962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:22:19.637973 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:22:19.637999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:22:19.638011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:22:19.638023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:22:19.638037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:22:19.638048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:22:19.638059 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:22:19.638081 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:22:19.638096 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:22:19.638107 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:22:19.638118 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:22:19.638130 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:22:19.638141 kernel: loop: module loaded Jun 25 16:22:19.638154 kernel: fuse: init (API version 7.37) Jun 25 16:22:19.638165 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:22:19.638176 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:22:19.638187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:22:19.638198 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:22:19.638209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
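The modprobe@ units being started above are instances of systemd's modprobe@.service template; each one simply loads the kernel module named by its instance suffix, so the fuse instance, for example, amounts to roughly:

    modprobe fuse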
Jun 25 16:22:19.638220 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:22:19.638233 systemd-journald[1068]: Journal started Jun 25 16:22:19.638274 systemd-journald[1068]: Runtime Journal (/run/log/journal/ee73eefb5bf647cd9941fd5c1c8a6683) is 6.0M, max 48.4M, 42.3M free. Jun 25 16:22:18.599000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:22:19.078000 audit: BPF prog-id=10 op=LOAD Jun 25 16:22:19.078000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:22:19.078000 audit: BPF prog-id=11 op=LOAD Jun 25 16:22:19.078000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:22:19.456000 audit: BPF prog-id=12 op=LOAD Jun 25 16:22:19.456000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:22:19.457000 audit: BPF prog-id=13 op=LOAD Jun 25 16:22:19.457000 audit: BPF prog-id=14 op=LOAD Jun 25 16:22:19.457000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:22:19.457000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:22:19.458000 audit: BPF prog-id=15 op=LOAD Jun 25 16:22:19.458000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:22:19.458000 audit: BPF prog-id=16 op=LOAD Jun 25 16:22:19.458000 audit: BPF prog-id=17 op=LOAD Jun 25 16:22:19.458000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:22:19.458000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:22:19.458000 audit: BPF prog-id=18 op=LOAD Jun 25 16:22:19.458000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:22:19.459000 audit: BPF prog-id=19 op=LOAD Jun 25 16:22:19.459000 audit: BPF prog-id=20 op=LOAD Jun 25 16:22:19.459000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:22:19.459000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:22:19.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.478000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:22:19.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:19.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.610000 audit: BPF prog-id=21 op=LOAD Jun 25 16:22:19.610000 audit: BPF prog-id=22 op=LOAD Jun 25 16:22:19.610000 audit: BPF prog-id=23 op=LOAD Jun 25 16:22:19.610000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:22:19.610000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:22:19.635000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:22:19.635000 audit[1068]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd21d020c0 a2=4000 a3=7ffd21d0215c items=0 ppid=1 pid=1068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:19.635000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:22:19.446020 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:22:19.639378 systemd[1]: Stopped verity-setup.service. Jun 25 16:22:19.446031 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:22:19.459465 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:22:19.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.643085 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:19.646455 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:22:19.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.647134 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:22:19.648620 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:22:19.650018 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:22:19.651290 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:22:19.652685 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:22:19.654048 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:22:19.655348 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:22:19.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.656995 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:22:19.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.658495 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jun 25 16:22:19.658608 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:22:19.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.660219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:19.660325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:19.662265 kernel: ACPI: bus type drm_connector registered Jun 25 16:22:19.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.662456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:22:19.662564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:22:19.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.664189 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:22:19.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.664300 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:22:19.665831 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:22:19.665942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:22:19.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.667441 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:22:19.667548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 25 16:22:19.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.669122 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:22:19.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.670753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:22:19.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.672313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:22:19.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.674135 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:22:19.684330 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:22:19.687458 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:22:19.688726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:22:19.690934 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:22:19.695763 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:22:19.697813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:22:19.699461 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:22:19.700706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:22:19.707557 systemd-journald[1068]: Time spent on flushing to /var/log/journal/ee73eefb5bf647cd9941fd5c1c8a6683 is 13.951ms for 1077 entries. Jun 25 16:22:19.707557 systemd-journald[1068]: System Journal (/var/log/journal/ee73eefb5bf647cd9941fd5c1c8a6683) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:22:19.739519 systemd-journald[1068]: Received client request to flush runtime journal. Jun 25 16:22:19.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:19.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.702277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:22:19.704616 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:22:19.708196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:22:19.710037 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:22:19.741908 udevadm[1096]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:22:19.711225 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:22:19.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:19.719280 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:22:19.721028 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:22:19.722524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:22:19.723843 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:22:19.727430 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:22:19.729746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:22:19.740678 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:22:19.754295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:22:19.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.398058 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:22:20.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.420000 audit: BPF prog-id=24 op=LOAD Jun 25 16:22:20.420000 audit: BPF prog-id=25 op=LOAD Jun 25 16:22:20.420000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:22:20.420000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:22:20.435370 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:22:20.450743 systemd-udevd[1102]: Using default interface naming scheme 'v252'. Jun 25 16:22:20.463750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:22:20.501862 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1106) Jun 25 16:22:20.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.498000 audit: BPF prog-id=26 op=LOAD Jun 25 16:22:20.499181 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:22:20.505118 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1105) Jun 25 16:22:20.510000 audit: BPF prog-id=27 op=LOAD Jun 25 16:22:20.510000 audit: BPF prog-id=28 op=LOAD Jun 25 16:22:20.510000 audit: BPF prog-id=29 op=LOAD Jun 25 16:22:20.506307 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:22:20.511736 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:22:20.536095 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:22:20.539961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:22:20.545103 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:22:20.556524 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 16:22:20.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.553902 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:22:20.581087 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:22:20.610396 systemd-networkd[1115]: lo: Link UP Jun 25 16:22:20.610407 systemd-networkd[1115]: lo: Gained carrier Jun 25 16:22:20.610737 systemd-networkd[1115]: Enumeration completed Jun 25 16:22:20.610808 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:22:20.610895 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:20.610902 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:22:20.612458 systemd-networkd[1115]: eth0: Link UP Jun 25 16:22:20.612464 systemd-networkd[1115]: eth0: Gained carrier Jun 25 16:22:20.612473 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:22:20.617094 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:22:20.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.629238 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 25 16:22:20.629415 systemd-networkd[1115]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:22:20.684339 kernel: SVM: TSC scaling supported Jun 25 16:22:20.684440 kernel: kvm: Nested Virtualization enabled Jun 25 16:22:20.684456 kernel: SVM: kvm: Nested Paging enabled Jun 25 16:22:20.684470 kernel: SVM: Virtual VMLOAD VMSAVE supported Jun 25 16:22:20.685287 kernel: SVM: Virtual GIF supported Jun 25 16:22:20.685308 kernel: SVM: LBR virtualization supported Jun 25 16:22:20.725089 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:22:20.762606 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:22:20.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.776330 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:22:20.790896 lvm[1139]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:22:20.819177 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:22:20.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.820850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:22:20.830308 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:22:20.833797 lvm[1140]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:22:20.860033 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:22:20.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.868711 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:22:20.870139 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:22:20.870163 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:22:20.871484 systemd[1]: Reached target machines.target - Containers. Jun 25 16:22:20.884282 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:22:20.885921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:22:20.885995 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:20.887221 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:22:20.889399 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:22:20.892290 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jun 25 16:22:20.895173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:22:20.896588 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1142 (bootctl) Jun 25 16:22:20.897826 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:22:20.908098 kernel: loop0: detected capacity change from 0 to 80584 Jun 25 16:22:20.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:20.909354 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:22:20.923128 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:22:20.958124 kernel: loop1: detected capacity change from 0 to 139360 Jun 25 16:22:21.590249 systemd-fsck[1149]: fsck.fat 4.2 (2021-01-31) Jun 25 16:22:21.590249 systemd-fsck[1149]: /dev/vda1: 808 files, 120378/258078 clusters Jun 25 16:22:21.592891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:22:21.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:21.601161 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:22:21.607548 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:22:21.609874 kernel: loop2: detected capacity change from 0 to 210664 Jun 25 16:22:21.622251 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:22:21.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:21.649087 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:22:21.658111 kernel: loop4: detected capacity change from 0 to 139360 Jun 25 16:22:21.698107 kernel: loop5: detected capacity change from 0 to 210664 Jun 25 16:22:21.702808 (sd-sysext)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 16:22:21.703413 (sd-sysext)[1155]: Merged extensions into '/usr'. Jun 25 16:22:21.705528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:22:21.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:21.762392 systemd[1]: Starting ensure-sysext.service... Jun 25 16:22:21.791061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:22:21.802161 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:22:21.805833 systemd[1]: Reloading. Jun 25 16:22:21.841976 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jun 25 16:22:21.842464 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:22:21.843242 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:22:21.971588 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:22:21.990988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:22:22.052000 audit: BPF prog-id=30 op=LOAD Jun 25 16:22:22.052000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:22:22.053000 audit: BPF prog-id=31 op=LOAD Jun 25 16:22:22.053000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:22:22.053000 audit: BPF prog-id=32 op=LOAD Jun 25 16:22:22.053000 audit: BPF prog-id=33 op=LOAD Jun 25 16:22:22.053000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:22:22.053000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:22:22.054000 audit: BPF prog-id=34 op=LOAD Jun 25 16:22:22.054000 audit: BPF prog-id=35 op=LOAD Jun 25 16:22:22.054000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:22:22.054000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:22:22.055000 audit: BPF prog-id=36 op=LOAD Jun 25 16:22:22.055000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:22:22.055000 audit: BPF prog-id=37 op=LOAD Jun 25 16:22:22.055000 audit: BPF prog-id=38 op=LOAD Jun 25 16:22:22.055000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:22:22.055000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:22:22.057369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:22:22.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.111185 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:22:22.133248 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:22:22.136224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:22:22.139868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:22:22.138000 audit: BPF prog-id=39 op=LOAD Jun 25 16:22:22.143651 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:22:22.142000 audit: BPF prog-id=40 op=LOAD Jun 25 16:22:22.147239 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:22:22.153028 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:22.153287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:22.155058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:22:22.157628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:22:22.160029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:22:22.161284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 16:22:22.161411 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:22.161520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:22.162691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:22.162832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:22.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.164593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:22:22.164722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:22:22.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.166388 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:22:22.166503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:22:22.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.168208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:22:22.168345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:22:22.170136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:22.170368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:22.181412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:22:22.194684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:22:22.197103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:22:22.198301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 16:22:22.198430 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:22.198549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:22.199503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:22.199665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:22.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.201349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:22:22.201477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:22:22.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.203104 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:22:22.203226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:22:22.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:22.204799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:22:22.204935 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:22:22.207039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:22.207410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:22:22.222429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:22:22.249893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jun 25 16:22:22.252193 augenrules[1241]: No rules Jun 25 16:22:22.252000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:22:22.252000 audit[1241]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd85db0e80 a2=420 a3=0 items=0 ppid=1214 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:22.252000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:22:22.252478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:22:22.255191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:22:22.256529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:22:22.256658 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:22.256834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:22:22.258252 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:22:22.258427 systemd-timesyncd[1231]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 16:22:22.258464 systemd-timesyncd[1231]: Initial clock synchronization to Tue 2024-06-25 16:22:22.314666 UTC. Jun 25 16:22:22.259725 systemd-resolved[1228]: Positive Trust Anchors: Jun 25 16:22:22.259748 systemd-resolved[1228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:22:22.259787 systemd-resolved[1228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:22:22.261213 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:22:22.263188 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:22:22.264734 systemd-resolved[1228]: Defaulting to hostname 'linux'. Jun 25 16:22:22.265039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:22:22.265188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:22:22.267039 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:22:22.269275 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:22:22.269419 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:22:22.271436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:22:22.271580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:22:22.273446 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 25 16:22:22.273587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:22:22.277969 systemd[1]: Reached target network.target - Network. Jun 25 16:22:22.305271 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:22:22.306514 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:22:22.307624 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:22:22.307737 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:22:22.308337 systemd[1]: Finished ensure-sysext.service. Jun 25 16:22:22.310411 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:22:22.318177 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:22:22.319527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:22:22.320160 systemd-networkd[1115]: eth0: Gained IPv6LL Jun 25 16:22:22.321405 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:22:22.322731 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:22:22.410621 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:22:22.475278 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:22:22.484325 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:22:22.541605 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:22:22.542802 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:22:22.543942 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:22:22.545296 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:22:22.546542 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:22:22.547648 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:22:22.548777 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:22:22.548803 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:22:22.549696 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:22:22.551001 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:22:22.553179 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:22:22.566173 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:22:22.567272 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:22.567583 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:22:22.568677 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 25 16:22:22.630138 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:22:22.631132 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:22:22.631155 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:22:22.632036 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:22:22.633974 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 16:22:22.636036 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:22:22.638051 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:22:22.640117 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:22:22.641138 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:22:22.664384 jq[1256]: false Jun 25 16:22:22.667185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:22.669578 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:22:22.671838 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:22:22.673710 extend-filesystems[1257]: Found loop3 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found loop4 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found loop5 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found sr0 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda1 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda2 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda3 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found usr Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda4 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda6 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda7 Jun 25 16:22:22.705533 extend-filesystems[1257]: Found vda9 Jun 25 16:22:22.705533 extend-filesystems[1257]: Checking size of /dev/vda9 Jun 25 16:22:22.704433 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:22:22.723242 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:22:22.725263 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:22:22.728405 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:22:22.729506 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:22:22.729569 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:22:22.729902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:22:22.730817 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:22:22.733049 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:22:22.735674 jq[1280]: true Jun 25 16:22:22.779235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jun 25 16:22:22.779423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:22:22.780539 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:22:22.780682 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:22:22.782040 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:22:22.784049 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:22:22.784288 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:22:22.790285 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 16:22:22.790436 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 16:22:22.791026 jq[1285]: true Jun 25 16:22:22.792312 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:22:22.796613 dbus-daemon[1255]: [system] SELinux support is enabled Jun 25 16:22:22.797741 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:22:22.800587 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:22:22.800610 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:22:22.801853 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:22:22.801871 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:22:22.811225 tar[1284]: linux-amd64/helm Jun 25 16:22:22.819943 update_engine[1279]: I0625 16:22:22.811528 1279 main.cc:92] Flatcar Update Engine starting Jun 25 16:22:22.819943 update_engine[1279]: I0625 16:22:22.813483 1279 update_check_scheduler.cc:74] Next update check in 7m15s Jun 25 16:22:22.820238 extend-filesystems[1257]: Resized partition /dev/vda9 Jun 25 16:22:22.812619 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:22:22.819241 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:22:22.855092 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1111) Jun 25 16:22:22.867109 extend-filesystems[1310]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:22:22.873641 systemd-logind[1277]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:22:22.873665 systemd-logind[1277]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:22:22.875208 systemd-logind[1277]: New seat seat0. Jun 25 16:22:22.876959 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:22:22.909092 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 16:22:22.911725 locksmithd[1305]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:22:22.952686 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:22:22.953434 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jun 25 16:22:23.534099 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 16:22:24.329719 containerd[1286]: time="2024-06-25T16:22:24.329636857Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:22:24.330407 extend-filesystems[1310]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 16:22:24.330407 extend-filesystems[1310]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 16:22:24.330407 extend-filesystems[1310]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 16:22:24.337580 extend-filesystems[1257]: Resized filesystem in /dev/vda9 Jun 25 16:22:24.333501 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:22:24.340586 sshd_keygen[1301]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:22:24.340659 bash[1308]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:22:24.335021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:22:24.340159 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:22:24.342121 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 16:22:24.358101 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:22:24.361558 containerd[1286]: time="2024-06-25T16:22:24.361507085Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:22:24.361635 containerd[1286]: time="2024-06-25T16:22:24.361580360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363046 containerd[1286]: time="2024-06-25T16:22:24.363006920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363046 containerd[1286]: time="2024-06-25T16:22:24.363038563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363328 containerd[1286]: time="2024-06-25T16:22:24.363307076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363356 containerd[1286]: time="2024-06-25T16:22:24.363327620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:22:24.363409 containerd[1286]: time="2024-06-25T16:22:24.363394030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363468 containerd[1286]: time="2024-06-25T16:22:24.363451651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363498 containerd[1286]: time="2024-06-25T16:22:24.363468445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363590 containerd[1286]: time="2024-06-25T16:22:24.363566679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363824 containerd[1286]: time="2024-06-25T16:22:24.363804377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363858 containerd[1286]: time="2024-06-25T16:22:24.363826241Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:22:24.363858 containerd[1286]: time="2024-06-25T16:22:24.363835687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363944 containerd[1286]: time="2024-06-25T16:22:24.363926734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:22:24.363944 containerd[1286]: time="2024-06-25T16:22:24.363942056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:22:24.363995 containerd[1286]: time="2024-06-25T16:22:24.363983366Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:22:24.364016 containerd[1286]: time="2024-06-25T16:22:24.363994394Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:22:24.365406 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:22:24.370886 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:22:24.371044 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:22:24.373449 containerd[1286]: time="2024-06-25T16:22:24.373420272Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:22:24.373536 containerd[1286]: time="2024-06-25T16:22:24.373524060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:22:24.373589 containerd[1286]: time="2024-06-25T16:22:24.373578062Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:22:24.373673 containerd[1286]: time="2024-06-25T16:22:24.373659491Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:22:24.373800 containerd[1286]: time="2024-06-25T16:22:24.373788804Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:22:24.373852 containerd[1286]: time="2024-06-25T16:22:24.373843430Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:22:24.373896 containerd[1286]: time="2024-06-25T16:22:24.373886968Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:22:24.374065 containerd[1286]: time="2024-06-25T16:22:24.374045030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:22:24.374179 containerd[1286]: time="2024-06-25T16:22:24.374161742Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jun 25 16:22:24.374237 containerd[1286]: time="2024-06-25T16:22:24.374226730Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:22:24.374296 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:22:24.374528 containerd[1286]: time="2024-06-25T16:22:24.374512523Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:22:24.374600 containerd[1286]: time="2024-06-25T16:22:24.374589326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374651 containerd[1286]: time="2024-06-25T16:22:24.374642370Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374697 containerd[1286]: time="2024-06-25T16:22:24.374687944Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374746 containerd[1286]: time="2024-06-25T16:22:24.374736652Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374790 containerd[1286]: time="2024-06-25T16:22:24.374781390Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374833 containerd[1286]: time="2024-06-25T16:22:24.374825199Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374877 containerd[1286]: time="2024-06-25T16:22:24.374868898Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.374926 containerd[1286]: time="2024-06-25T16:22:24.374916589Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:22:24.375049 containerd[1286]: time="2024-06-25T16:22:24.375036577Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:22:24.375627 containerd[1286]: time="2024-06-25T16:22:24.375613563Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:22:24.375693 containerd[1286]: time="2024-06-25T16:22:24.375682746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.375751 containerd[1286]: time="2024-06-25T16:22:24.375740809Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:22:24.375875 containerd[1286]: time="2024-06-25T16:22:24.375845727Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:22:24.376002 containerd[1286]: time="2024-06-25T16:22:24.375989757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376150 containerd[1286]: time="2024-06-25T16:22:24.376133434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376222 containerd[1286]: time="2024-06-25T16:22:24.376211266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jun 25 16:22:24.376277 containerd[1286]: time="2024-06-25T16:22:24.376266335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376327 containerd[1286]: time="2024-06-25T16:22:24.376317796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376374 containerd[1286]: time="2024-06-25T16:22:24.376365245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376417 containerd[1286]: time="2024-06-25T16:22:24.376408592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376459 containerd[1286]: time="2024-06-25T16:22:24.376450798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.376519 containerd[1286]: time="2024-06-25T16:22:24.376496645Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376618084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376643396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376654475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376670986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376683567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376698093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376708607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:22:24.377360 containerd[1286]: time="2024-06-25T16:22:24.376718787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:22:24.377530 tar[1284]: linux-amd64/LICENSE Jun 25 16:22:24.377530 tar[1284]: linux-amd64/README.md Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.376941224Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377001475Z" level=info msg="Connect containerd service" Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377021192Z" level=info msg="using legacy CRI server" Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377026344Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377043571Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377604822Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not 
initialized: failed to load cni config" Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377647019Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377662220Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377671212Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:22:24.377727 containerd[1286]: time="2024-06-25T16:22:24.377707431Z" level=info msg="Start subscribing containerd event" Jun 25 16:22:24.378249 containerd[1286]: time="2024-06-25T16:22:24.377746896Z" level=info msg="Start recovering state" Jun 25 16:22:24.378249 containerd[1286]: time="2024-06-25T16:22:24.377792874Z" level=info msg="Start event monitor" Jun 25 16:22:24.378249 containerd[1286]: time="2024-06-25T16:22:24.377801290Z" level=info msg="Start snapshots syncer" Jun 25 16:22:24.378249 containerd[1286]: time="2024-06-25T16:22:24.377808629Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:22:24.378249 containerd[1286]: time="2024-06-25T16:22:24.377815555Z" level=info msg="Start streaming server" Jun 25 16:22:24.378249 containerd[1286]: time="2024-06-25T16:22:24.378004775Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:22:24.378355 containerd[1286]: time="2024-06-25T16:22:24.378279197Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:22:24.378355 containerd[1286]: time="2024-06-25T16:22:24.378325769Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:22:24.380031 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:22:24.382369 containerd[1286]: time="2024-06-25T16:22:24.380376844Z" level=info msg="containerd successfully booted in 0.764803s" Jun 25 16:22:24.382251 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:22:24.385162 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:22:24.393694 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:22:24.396821 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:22:24.398426 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:22:24.928631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:24.930918 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:22:24.934422 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:22:24.941412 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:22:24.941559 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:22:24.943006 systemd[1]: Startup finished in 781ms (kernel) + 5.889s (initrd) + 6.381s (userspace) = 13.051s. 
Jun 25 16:22:25.358322 kubelet[1341]: E0625 16:22:25.358187 1341 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:25.359829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:25.360008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:31.251348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:22:31.252395 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:60842.service - OpenSSH per-connection server daemon (10.0.0.1:60842). Jun 25 16:22:31.286235 sshd[1351]: Accepted publickey for core from 10.0.0.1 port 60842 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:31.287797 sshd[1351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:31.295830 systemd-logind[1277]: New session 1 of user core. Jun 25 16:22:31.296962 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:22:31.305447 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:22:31.314797 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:22:31.316377 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:22:31.319128 (systemd)[1354]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:31.416818 systemd[1354]: Queued start job for default target default.target. Jun 25 16:22:31.431684 systemd[1354]: Reached target paths.target - Paths. Jun 25 16:22:31.431721 systemd[1354]: Reached target sockets.target - Sockets. Jun 25 16:22:31.431733 systemd[1354]: Reached target timers.target - Timers. Jun 25 16:22:31.431743 systemd[1354]: Reached target basic.target - Basic System. Jun 25 16:22:31.431796 systemd[1354]: Reached target default.target - Main User Target. Jun 25 16:22:31.431820 systemd[1354]: Startup finished in 107ms. Jun 25 16:22:31.431926 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:22:31.433410 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:22:31.510560 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:60852.service - OpenSSH per-connection server daemon (10.0.0.1:60852). Jun 25 16:22:31.545196 sshd[1363]: Accepted publickey for core from 10.0.0.1 port 60852 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:31.546467 sshd[1363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:31.550002 systemd-logind[1277]: New session 2 of user core. Jun 25 16:22:31.562258 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:22:31.614445 sshd[1363]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:31.629003 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:60852.service: Deactivated successfully. Jun 25 16:22:31.629518 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:22:31.629921 systemd-logind[1277]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:22:31.630908 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:60860.service - OpenSSH per-connection server daemon (10.0.0.1:60860). Jun 25 16:22:31.631611 systemd-logind[1277]: Removed session 2. 
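The kubelet failure above, and its repeats later in this log, is the expected state of a node that has not yet been joined to a cluster: kubelet.service starts before /var/lib/kubelet/config.yaml exists, and on a kubeadm-style setup that file is only written by kubeadm init or kubeadm join. Purely to make the missing file concrete, here is an illustrative sketch of a minimal KubeletConfiguration; the field values are placeholders, not taken from this host, and on a real node the file should be left to kubeadm rather than written by hand.

```python
from pathlib import Path

# Illustrative only: run in a scratch environment, not on a managed node.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup:true in the containerd runc options above
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path}")
```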
Jun 25 16:22:31.662413 sshd[1369]: Accepted publickey for core from 10.0.0.1 port 60860 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:31.663452 sshd[1369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:31.666619 systemd-logind[1277]: New session 3 of user core. Jun 25 16:22:31.682254 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:22:31.731652 sshd[1369]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:31.740867 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:60860.service: Deactivated successfully. Jun 25 16:22:31.741424 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:22:31.741931 systemd-logind[1277]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:22:31.743058 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:60872.service - OpenSSH per-connection server daemon (10.0.0.1:60872). Jun 25 16:22:31.743791 systemd-logind[1277]: Removed session 3. Jun 25 16:22:31.774444 sshd[1375]: Accepted publickey for core from 10.0.0.1 port 60872 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:31.775555 sshd[1375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:31.778666 systemd-logind[1277]: New session 4 of user core. Jun 25 16:22:31.792247 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:22:31.845299 sshd[1375]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:31.856546 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:60872.service: Deactivated successfully. Jun 25 16:22:31.857170 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:22:31.857741 systemd-logind[1277]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:22:31.858926 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:60876.service - OpenSSH per-connection server daemon (10.0.0.1:60876). Jun 25 16:22:31.859657 systemd-logind[1277]: Removed session 4. Jun 25 16:22:31.893015 sshd[1381]: Accepted publickey for core from 10.0.0.1 port 60876 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:31.894284 sshd[1381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:31.902005 systemd-logind[1277]: New session 5 of user core. Jun 25 16:22:31.915387 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:22:31.975002 sudo[1384]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:22:31.975250 sudo[1384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:31.989784 sudo[1384]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:31.991465 sshd[1381]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:32.001881 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:60876.service: Deactivated successfully. Jun 25 16:22:32.002532 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:22:32.003119 systemd-logind[1277]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:22:32.004784 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:60886.service - OpenSSH per-connection server daemon (10.0.0.1:60886). Jun 25 16:22:32.005469 systemd-logind[1277]: Removed session 5. 
Jun 25 16:22:32.039316 sshd[1388]: Accepted publickey for core from 10.0.0.1 port 60886 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:32.040980 sshd[1388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:32.045380 systemd-logind[1277]: New session 6 of user core. Jun 25 16:22:32.055321 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:22:32.110135 sudo[1392]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:22:32.110409 sudo[1392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:32.113299 sudo[1392]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:32.118271 sudo[1391]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:22:32.118514 sudo[1391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:32.135377 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:22:32.135000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:22:32.137109 auditctl[1395]: No rules Jun 25 16:22:32.137482 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:22:32.137614 kernel: kauditd_printk_skb: 152 callbacks suppressed Jun 25 16:22:32.137643 kernel: audit: type=1305 audit(1719332552.135:196): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:22:32.137693 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:22:32.135000 audit[1395]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9d7e1500 a2=420 a3=0 items=0 ppid=1 pid=1395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:32.140310 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:22:32.143254 kernel: audit: type=1300 audit(1719332552.135:196): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9d7e1500 a2=420 a3=0 items=0 ppid=1 pid=1395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:32.143298 kernel: audit: type=1327 audit(1719332552.135:196): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:22:32.135000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:22:32.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.146923 kernel: audit: type=1131 audit(1719332552.136:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.160950 augenrules[1412]: No rules Jun 25 16:22:32.161829 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
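The audit PROCTITLE records here, and in the Docker iptables setup further down, carry the audited command line as a hex string with NUL bytes separating the argv entries. A small decoder makes them readable; both sample payloads below are copied from this log.

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE payload (hex, NUL-separated argv) into a shell-style string."""
    argv = bytes.fromhex(hexstr).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv if arg)

# The auditctl record above:
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
# -> /sbin/auditctl -D

# One of the dockerd-generated NETFILTER_CFG records below:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER
```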
Jun 25 16:22:32.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.162971 sudo[1391]: pam_unix(sudo:session): session closed for user root Jun 25 16:22:32.161000 audit[1391]: USER_END pid=1391 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.165105 sshd[1388]: pam_unix(sshd:session): session closed for user core Jun 25 16:22:32.167801 kernel: audit: type=1130 audit(1719332552.160:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.167862 kernel: audit: type=1106 audit(1719332552.161:199): pid=1391 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.167883 kernel: audit: type=1104 audit(1719332552.162:200): pid=1391 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.162000 audit[1391]: CRED_DISP pid=1391 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.165000 audit[1388]: USER_END pid=1388 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.173672 kernel: audit: type=1106 audit(1719332552.165:201): pid=1388 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.173706 kernel: audit: type=1104 audit(1719332552.165:202): pid=1388 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.165000 audit[1388]: CRED_DISP pid=1388 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.188413 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:60886.service: Deactivated successfully. Jun 25 16:22:32.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.90:22-10.0.0.1:60886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.188968 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:22:32.189572 systemd-logind[1277]: Session 6 logged out. Waiting for processes to exit. 
Jun 25 16:22:32.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.90:22-10.0.0.1:60902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.190594 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:60902.service - OpenSSH per-connection server daemon (10.0.0.1:60902). Jun 25 16:22:32.191352 systemd-logind[1277]: Removed session 6. Jun 25 16:22:32.192111 kernel: audit: type=1131 audit(1719332552.187:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.90:22-10.0.0.1:60886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.222000 audit[1418]: USER_ACCT pid=1418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.224268 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 60902 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:22:32.223000 audit[1418]: CRED_ACQ pid=1418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.223000 audit[1418]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc63406e0 a2=3 a3=7ff0bc891480 items=0 ppid=1 pid=1418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:32.223000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:22:32.225195 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:22:32.228506 systemd-logind[1277]: New session 7 of user core. Jun 25 16:22:32.235242 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:22:32.237000 audit[1418]: USER_START pid=1418 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.239000 audit[1420]: CRED_ACQ pid=1420 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:22:32.285000 audit[1421]: USER_ACCT pid=1421 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.285000 audit[1421]: CRED_REFR pid=1421 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:32.286846 sudo[1421]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:22:32.287135 sudo[1421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:22:32.287000 audit[1421]: USER_START pid=1421 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:22:32.379451 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:22:32.610784 dockerd[1431]: time="2024-06-25T16:22:32.610713209Z" level=info msg="Starting up" Jun 25 16:22:33.073193 dockerd[1431]: time="2024-06-25T16:22:33.073117589Z" level=info msg="Loading containers: start." Jun 25 16:22:33.128000 audit[1466]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.128000 audit[1466]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffffff9a400 a2=0 a3=7febb4801e90 items=0 ppid=1431 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.128000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:22:33.130000 audit[1468]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.130000 audit[1468]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff7735f990 a2=0 a3=7febf32d7e90 items=0 ppid=1431 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.130000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:22:33.132000 audit[1470]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.132000 audit[1470]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdb9fc2d40 a2=0 a3=7f0a1ae0ae90 items=0 ppid=1431 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.132000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:22:33.134000 audit[1472]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.134000 audit[1472]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc5c18c050 a2=0 a3=7fef06cb2e90 items=0 ppid=1431 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.134000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:22:33.138000 
audit[1474]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.138000 audit[1474]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf8463ca0 a2=0 a3=7fabd96f3e90 items=0 ppid=1431 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:22:33.141000 audit[1476]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.141000 audit[1476]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe9928650 a2=0 a3=7fecd787ee90 items=0 ppid=1431 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.141000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:22:33.153000 audit[1478]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.153000 audit[1478]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedcd1b080 a2=0 a3=7f73f47f6e90 items=0 ppid=1431 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.153000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:22:33.155000 audit[1480]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.155000 audit[1480]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcacd8f170 a2=0 a3=7f50bf3b6e90 items=0 ppid=1431 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.155000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:22:33.157000 audit[1482]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.157000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcb67d8760 a2=0 a3=7f390e0c7e90 items=0 ppid=1431 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.157000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:33.170000 audit[1486]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 
16:22:33.170000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe88c3c800 a2=0 a3=7f1fdc671e90 items=0 ppid=1431 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.170000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:33.171000 audit[1487]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.171000 audit[1487]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd86807f40 a2=0 a3=7f52d8606e90 items=0 ppid=1431 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.171000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:33.182124 kernel: Initializing XFRM netlink socket Jun 25 16:22:33.221000 audit[1495]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.221000 audit[1495]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffa1a5e090 a2=0 a3=7f42bb7e2e90 items=0 ppid=1431 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.221000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:22:33.236000 audit[1498]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.236000 audit[1498]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc21a3c280 a2=0 a3=7fb908784e90 items=0 ppid=1431 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.236000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:22:33.240000 audit[1502]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.240000 audit[1502]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc7bf7dfb0 a2=0 a3=7f9707fe6e90 items=0 ppid=1431 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.240000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:22:33.242000 audit[1504]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:22:33.242000 audit[1504]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeb2e424e0 a2=0 a3=7f5447e5ce90 items=0 ppid=1431 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.242000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:22:33.244000 audit[1506]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.244000 audit[1506]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffce67a61d0 a2=0 a3=7f423032ee90 items=0 ppid=1431 pid=1506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.244000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:22:33.247000 audit[1508]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.247000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcc0c86710 a2=0 a3=7f551d14fe90 items=0 ppid=1431 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.247000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:22:33.249000 audit[1510]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.249000 audit[1510]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd1633a560 a2=0 a3=7f76cea15e90 items=0 ppid=1431 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.249000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:22:33.255000 audit[1513]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.255000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff9a1a8c00 a2=0 a3=7fcf2174fe90 items=0 ppid=1431 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.255000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:22:33.257000 audit[1515]: NETFILTER_CFG 
table=filter:21 family=2 entries=1 op=nft_register_rule pid=1515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.257000 audit[1515]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffffb7ee9b0 a2=0 a3=7ff47d84be90 items=0 ppid=1431 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.257000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:22:33.259000 audit[1517]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.259000 audit[1517]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcafd19740 a2=0 a3=7faa69453e90 items=0 ppid=1431 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.259000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:22:33.261000 audit[1519]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.261000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd21551650 a2=0 a3=7f7db7d2fe90 items=0 ppid=1431 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.261000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:22:33.262902 systemd-networkd[1115]: docker0: Link UP Jun 25 16:22:33.272000 audit[1523]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.272000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd5e3db270 a2=0 a3=7f781ced6e90 items=0 ppid=1431 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.272000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:33.273000 audit[1524]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:33.273000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe0cf261d0 a2=0 a3=7f47334d5e90 items=0 ppid=1431 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:33.273000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:22:33.274203 dockerd[1431]: time="2024-06-25T16:22:33.274166863Z" level=info msg="Loading containers: done." Jun 25 16:22:33.363759 dockerd[1431]: time="2024-06-25T16:22:33.363585859Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:22:33.363959 dockerd[1431]: time="2024-06-25T16:22:33.363855773Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:22:33.364091 dockerd[1431]: time="2024-06-25T16:22:33.363991116Z" level=info msg="Daemon has completed initialization" Jun 25 16:22:33.401779 dockerd[1431]: time="2024-06-25T16:22:33.401708020Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:22:33.401915 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:22:33.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:33.950148 containerd[1286]: time="2024-06-25T16:22:33.950046224Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 16:22:34.740529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042650632.mount: Deactivated successfully. Jun 25 16:22:35.610855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:22:35.611019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:35.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.620299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:35.706375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:35.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:35.945617 kubelet[1631]: E0625 16:22:35.945492 1631 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:35.948462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:35.948588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:35.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 16:22:36.944742 containerd[1286]: time="2024-06-25T16:22:36.944665876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:36.974326 containerd[1286]: time="2024-06-25T16:22:36.974222964Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jun 25 16:22:37.013334 containerd[1286]: time="2024-06-25T16:22:37.013259876Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:37.068881 containerd[1286]: time="2024-06-25T16:22:37.068814120Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:37.125032 containerd[1286]: time="2024-06-25T16:22:37.124961529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:37.126164 containerd[1286]: time="2024-06-25T16:22:37.126107749Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.175967035s" Jun 25 16:22:37.126164 containerd[1286]: time="2024-06-25T16:22:37.126151167Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jun 25 16:22:37.146724 containerd[1286]: time="2024-06-25T16:22:37.146672264Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 16:22:39.510439 containerd[1286]: time="2024-06-25T16:22:39.510370109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:39.511268 containerd[1286]: time="2024-06-25T16:22:39.511217079Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jun 25 16:22:39.512664 containerd[1286]: time="2024-06-25T16:22:39.512614918Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:39.514701 containerd[1286]: time="2024-06-25T16:22:39.514661510Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:39.517052 containerd[1286]: time="2024-06-25T16:22:39.517028419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:39.519954 containerd[1286]: time="2024-06-25T16:22:39.519896201Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.37317626s" Jun 25 16:22:39.520007 containerd[1286]: time="2024-06-25T16:22:39.519955000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 16:22:39.541418 containerd[1286]: time="2024-06-25T16:22:39.541365382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 16:22:41.553463 containerd[1286]: time="2024-06-25T16:22:41.553401307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:41.554209 containerd[1286]: time="2024-06-25T16:22:41.554175038Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jun 25 16:22:41.555718 containerd[1286]: time="2024-06-25T16:22:41.555687143Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:41.557426 containerd[1286]: time="2024-06-25T16:22:41.557389605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:41.559562 containerd[1286]: time="2024-06-25T16:22:41.559532060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:41.560713 containerd[1286]: time="2024-06-25T16:22:41.560671620Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 2.019250571s" Jun 25 16:22:41.560767 containerd[1286]: time="2024-06-25T16:22:41.560712804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 16:22:41.581930 containerd[1286]: time="2024-06-25T16:22:41.581882735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 16:22:43.771777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665565774.mount: Deactivated successfully. 
Jun 25 16:22:44.277613 containerd[1286]: time="2024-06-25T16:22:44.277555648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:44.278430 containerd[1286]: time="2024-06-25T16:22:44.278359376Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jun 25 16:22:44.279670 containerd[1286]: time="2024-06-25T16:22:44.279641595Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:44.281430 containerd[1286]: time="2024-06-25T16:22:44.281400841Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:44.282873 containerd[1286]: time="2024-06-25T16:22:44.282811716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:44.283424 containerd[1286]: time="2024-06-25T16:22:44.283379161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.701224074s" Jun 25 16:22:44.283479 containerd[1286]: time="2024-06-25T16:22:44.283421439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 16:22:44.303668 containerd[1286]: time="2024-06-25T16:22:44.303610401Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 16:22:44.859516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378757901.mount: Deactivated successfully. 
Jun 25 16:22:45.706331 containerd[1286]: time="2024-06-25T16:22:45.706265901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:45.707244 containerd[1286]: time="2024-06-25T16:22:45.707169180Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 16:22:45.709018 containerd[1286]: time="2024-06-25T16:22:45.708988878Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:45.711208 containerd[1286]: time="2024-06-25T16:22:45.711173998Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:45.713393 containerd[1286]: time="2024-06-25T16:22:45.713347241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:45.714526 containerd[1286]: time="2024-06-25T16:22:45.714493687Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.410841049s" Jun 25 16:22:45.714584 containerd[1286]: time="2024-06-25T16:22:45.714534088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 16:22:45.735051 containerd[1286]: time="2024-06-25T16:22:45.735005602Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:22:46.199384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:22:46.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.199591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:46.200490 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:22:46.200547 kernel: audit: type=1130 audit(1719332566.198:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.203108 kernel: audit: type=1131 audit(1719332566.198:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.211334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:46.298673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:22:46.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.309127 kernel: audit: type=1130 audit(1719332566.297:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:46.705967 kubelet[1734]: E0625 16:22:46.705913 1734 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:22:46.707494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:22:46.707613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:22:46.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:22:46.711090 kernel: audit: type=1131 audit(1719332566.706:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:22:47.783879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577212761.mount: Deactivated successfully. Jun 25 16:22:47.790108 containerd[1286]: time="2024-06-25T16:22:47.790038833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:47.791023 containerd[1286]: time="2024-06-25T16:22:47.790960433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:22:47.792293 containerd[1286]: time="2024-06-25T16:22:47.792237991Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:47.794131 containerd[1286]: time="2024-06-25T16:22:47.794109714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:47.795890 containerd[1286]: time="2024-06-25T16:22:47.795861167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:47.796557 containerd[1286]: time="2024-06-25T16:22:47.796505648Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.061454485s" Jun 25 16:22:47.796612 containerd[1286]: time="2024-06-25T16:22:47.796552619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:22:47.814997 containerd[1286]: time="2024-06-25T16:22:47.814939337Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 16:22:48.708546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366570464.mount: Deactivated successfully. Jun 25 16:22:51.869593 containerd[1286]: time="2024-06-25T16:22:51.869513487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:51.901001 containerd[1286]: time="2024-06-25T16:22:51.900897893Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jun 25 16:22:51.917060 containerd[1286]: time="2024-06-25T16:22:51.917007203Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:51.961227 containerd[1286]: time="2024-06-25T16:22:51.961144217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:51.985815 containerd[1286]: time="2024-06-25T16:22:51.985748781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:22:51.987533 containerd[1286]: time="2024-06-25T16:22:51.987474434Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.172484139s" Jun 25 16:22:51.987533 containerd[1286]: time="2024-06-25T16:22:51.987532512Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 16:22:54.524609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:54.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:54.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:54.530079 kernel: audit: type=1130 audit(1719332574.523:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:54.530129 kernel: audit: type=1131 audit(1719332574.523:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:54.541480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:54.559279 systemd[1]: Reloading. Jun 25 16:22:54.831436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:22:54.883000 audit: BPF prog-id=44 op=LOAD Jun 25 16:22:54.883000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:22:54.887850 kernel: audit: type=1334 audit(1719332574.883:248): prog-id=44 op=LOAD Jun 25 16:22:54.887899 kernel: audit: type=1334 audit(1719332574.883:249): prog-id=39 op=UNLOAD Jun 25 16:22:54.893191 kernel: audit: type=1334 audit(1719332574.887:250): prog-id=45 op=LOAD Jun 25 16:22:54.893323 kernel: audit: type=1334 audit(1719332574.887:251): prog-id=30 op=UNLOAD Jun 25 16:22:54.893342 kernel: audit: type=1334 audit(1719332574.887:252): prog-id=46 op=LOAD Jun 25 16:22:54.893365 kernel: audit: type=1334 audit(1719332574.887:253): prog-id=40 op=UNLOAD Jun 25 16:22:54.893379 kernel: audit: type=1334 audit(1719332574.888:254): prog-id=47 op=LOAD Jun 25 16:22:54.893394 kernel: audit: type=1334 audit(1719332574.888:255): prog-id=31 op=UNLOAD Jun 25 16:22:54.887000 audit: BPF prog-id=45 op=LOAD Jun 25 16:22:54.887000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:22:54.887000 audit: BPF prog-id=46 op=LOAD Jun 25 16:22:54.887000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:22:54.888000 audit: BPF prog-id=47 op=LOAD Jun 25 16:22:54.888000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:22:54.889000 audit: BPF prog-id=48 op=LOAD Jun 25 16:22:54.889000 audit: BPF prog-id=49 op=LOAD Jun 25 16:22:54.889000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:22:54.889000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:22:54.890000 audit: BPF prog-id=50 op=LOAD Jun 25 16:22:54.890000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:22:54.890000 audit: BPF prog-id=51 op=LOAD Jun 25 16:22:54.890000 audit: BPF prog-id=52 op=LOAD Jun 25 16:22:54.890000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:22:54.890000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:22:54.891000 audit: BPF prog-id=53 op=LOAD Jun 25 16:22:54.891000 audit: BPF prog-id=54 op=LOAD Jun 25 16:22:54.891000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:22:54.891000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:22:54.891000 audit: BPF prog-id=55 op=LOAD Jun 25 16:22:54.891000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:22:54.892000 audit: BPF prog-id=56 op=LOAD Jun 25 16:22:54.892000 audit: BPF prog-id=57 op=LOAD Jun 25 16:22:54.892000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:22:54.892000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:22:54.909311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:54.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:54.912972 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:54.913430 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:22:54.913622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:54.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:22:54.915524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:22:55.024788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:22:55.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:22:55.068093 kubelet[1945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:22:55.068093 kubelet[1945]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:22:55.068093 kubelet[1945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:22:55.068852 kubelet[1945]: I0625 16:22:55.068812 1945 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:22:55.492981 kubelet[1945]: I0625 16:22:55.492922 1945 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:22:55.492981 kubelet[1945]: I0625 16:22:55.492967 1945 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:22:55.493275 kubelet[1945]: I0625 16:22:55.493248 1945 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:22:55.510742 kubelet[1945]: I0625 16:22:55.510700 1945 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:22:55.511232 kubelet[1945]: E0625 16:22:55.511205 1945 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.523572 kubelet[1945]: I0625 16:22:55.523501 1945 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:22:55.525227 kubelet[1945]: I0625 16:22:55.525177 1945 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:22:55.525402 kubelet[1945]: I0625 16:22:55.525227 1945 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:22:55.525774 kubelet[1945]: I0625 16:22:55.525756 1945 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:22:55.525774 kubelet[1945]: I0625 16:22:55.525771 1945 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:22:55.525899 kubelet[1945]: I0625 16:22:55.525885 1945 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:22:55.526448 kubelet[1945]: I0625 16:22:55.526433 1945 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:22:55.526484 kubelet[1945]: I0625 16:22:55.526448 1945 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:22:55.526484 kubelet[1945]: I0625 16:22:55.526467 1945 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:22:55.526484 kubelet[1945]: I0625 16:22:55.526480 1945 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:22:55.529013 kubelet[1945]: W0625 16:22:55.528971 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.529139 kubelet[1945]: E0625 16:22:55.529125 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.530230 kubelet[1945]: W0625 16:22:55.530197 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.530288 kubelet[1945]: E0625 16:22:55.530238 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.531521 kubelet[1945]: I0625 16:22:55.531479 1945 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:22:55.532720 kubelet[1945]: I0625 16:22:55.532693 1945 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:22:55.532782 kubelet[1945]: W0625 16:22:55.532757 1945 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:22:55.533425 kubelet[1945]: I0625 16:22:55.533408 1945 server.go:1264] "Started kubelet" Jun 25 16:22:55.533549 kubelet[1945]: I0625 16:22:55.533493 1945 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:22:55.534651 kubelet[1945]: I0625 16:22:55.534629 1945 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:22:55.536738 kubelet[1945]: I0625 16:22:55.536007 1945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:22:55.536738 kubelet[1945]: I0625 16:22:55.536396 1945 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:22:55.536967 kubelet[1945]: I0625 16:22:55.536946 1945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:22:55.537752 kubelet[1945]: E0625 16:22:55.537731 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:22:55.537812 kubelet[1945]: I0625 16:22:55.537804 1945 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:22:55.538495 kubelet[1945]: W0625 16:22:55.538450 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.538552 kubelet[1945]: E0625 16:22:55.538523 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.538613 kubelet[1945]: E0625 16:22:55.538595 1945 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:22:55.539397 kubelet[1945]: I0625 16:22:55.538740 1945 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:22:55.539397 kubelet[1945]: E0625 16:22:55.539052 1945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms" Jun 25 16:22:55.539397 kubelet[1945]: I0625 16:22:55.539126 1945 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:22:55.539619 kubelet[1945]: I0625 16:22:55.539603 1945 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:22:55.539842 kubelet[1945]: E0625 16:22:55.539693 1945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc4bde376cbab1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 16:22:55.533382321 +0000 UTC m=+0.504362580,LastTimestamp:2024-06-25 16:22:55.533382321 +0000 UTC m=+0.504362580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 16:22:55.538000 audit[1957]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.538000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec6577d00 a2=0 a3=7f37ea404e90 items=0 ppid=1945 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.538000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:22:55.540600 kubelet[1945]: I0625 16:22:55.540583 1945 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:22:55.540683 kubelet[1945]: I0625 16:22:55.540607 1945 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:22:55.539000 audit[1958]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.539000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6fd723e0 a2=0 a3=7f61d1178e90 items=0 ppid=1945 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:22:55.541000 audit[1960]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain 
pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.541000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd60f1cc80 a2=0 a3=7f362abe6e90 items=0 ppid=1945 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:22:55.543000 audit[1962]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.543000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf2e35350 a2=0 a3=7f91a976de90 items=0 ppid=1945 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:22:55.547000 audit[1965]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.547000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc5fb18dc0 a2=0 a3=7f2d74f7de90 items=0 ppid=1945 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:22:55.549362 kubelet[1945]: I0625 16:22:55.549327 1945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:22:55.548000 audit[1967]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:55.548000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc84481e70 a2=0 a3=7f3e78fb1e90 items=0 ppid=1945 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:22:55.550336 kubelet[1945]: I0625 16:22:55.550323 1945 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:22:55.550408 kubelet[1945]: I0625 16:22:55.550401 1945 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:22:55.550459 kubelet[1945]: I0625 16:22:55.550453 1945 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:22:55.550532 kubelet[1945]: E0625 16:22:55.550518 1945 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:22:55.551000 audit[1970]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.551000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffd2ca490 a2=0 a3=7f9dd6f77e90 items=0 ppid=1945 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:22:55.551000 audit[1971]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.551000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb1eb6e30 a2=0 a3=7f5448467e90 items=0 ppid=1945 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:22:55.552000 audit[1972]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:55.552000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefa83de80 a2=0 a3=7ff7eefcee90 items=0 ppid=1945 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:22:55.552000 audit[1973]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:22:55.552000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff138b8c50 a2=0 a3=7f82df953e90 items=0 ppid=1945 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:22:55.552000 audit[1974]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:55.552000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffffd10a490 a2=0 a3=4 items=0 ppid=1945 pid=1974 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:22:55.554837 kubelet[1945]: W0625 16:22:55.554777 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.554883 kubelet[1945]: E0625 16:22:55.554842 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:55.553000 audit[1975]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:22:55.553000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffa55157b0 a2=0 a3=7fbb55ba1e90 items=0 ppid=1945 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:22:55.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:22:55.555650 kubelet[1945]: I0625 16:22:55.555624 1945 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:22:55.555650 kubelet[1945]: I0625 16:22:55.555641 1945 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:22:55.555728 kubelet[1945]: I0625 16:22:55.555681 1945 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:22:55.639910 kubelet[1945]: I0625 16:22:55.639867 1945 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:55.640353 kubelet[1945]: E0625 16:22:55.640315 1945 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Jun 25 16:22:55.650636 kubelet[1945]: E0625 16:22:55.650608 1945 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:22:55.740606 kubelet[1945]: E0625 16:22:55.740537 1945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" Jun 25 16:22:55.842225 kubelet[1945]: I0625 16:22:55.842124 1945 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:55.842558 kubelet[1945]: E0625 16:22:55.842529 1945 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Jun 25 16:22:55.851630 kubelet[1945]: E0625 16:22:55.851606 1945 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:22:56.141334 kubelet[1945]: E0625 
16:22:56.141271 1945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms" Jun 25 16:22:56.243985 kubelet[1945]: I0625 16:22:56.243948 1945 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:56.244253 kubelet[1945]: E0625 16:22:56.244229 1945 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Jun 25 16:22:56.252416 kubelet[1945]: E0625 16:22:56.252381 1945 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:22:56.532147 kubelet[1945]: W0625 16:22:56.531959 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.532147 kubelet[1945]: E0625 16:22:56.532047 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.650278 kubelet[1945]: W0625 16:22:56.650203 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.650278 kubelet[1945]: E0625 16:22:56.650267 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.680021 kubelet[1945]: W0625 16:22:56.679892 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.680021 kubelet[1945]: E0625 16:22:56.679973 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.924983 kubelet[1945]: W0625 16:22:56.924877 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.924983 kubelet[1945]: E0625 16:22:56.924951 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:56.942437 kubelet[1945]: E0625 16:22:56.942387 1945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="1.6s" Jun 25 16:22:57.047001 kubelet[1945]: I0625 16:22:57.046338 1945 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:57.047231 kubelet[1945]: E0625 16:22:57.047125 1945 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Jun 25 16:22:57.053375 kubelet[1945]: E0625 16:22:57.053327 1945 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:22:57.132380 kubelet[1945]: I0625 16:22:57.132251 1945 policy_none.go:49] "None policy: Start" Jun 25 16:22:57.133197 kubelet[1945]: I0625 16:22:57.133161 1945 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:22:57.133197 kubelet[1945]: I0625 16:22:57.133206 1945 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:22:57.208251 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:22:57.229417 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:22:57.232032 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 16:22:57.241008 kubelet[1945]: I0625 16:22:57.240965 1945 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:22:57.241370 kubelet[1945]: I0625 16:22:57.241196 1945 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 16:22:57.241370 kubelet[1945]: I0625 16:22:57.241293 1945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:22:57.243374 kubelet[1945]: E0625 16:22:57.243336 1945 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:22:57.701772 kubelet[1945]: E0625 16:22:57.701711 1945 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:58.543539 kubelet[1945]: E0625 16:22:58.543460 1945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="3.2s" Jun 25 16:22:58.649217 kubelet[1945]: I0625 16:22:58.649166 1945 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:22:58.649555 kubelet[1945]: E0625 16:22:58.649529 1945 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Jun 25 16:22:58.653726 kubelet[1945]: I0625 16:22:58.653673 1945 topology_manager.go:215] "Topology Admit Handler" podUID="0a1a74c0ad5f3d3f4a55cce33be075b1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:22:58.654470 kubelet[1945]: I0625 16:22:58.654436 1945 
topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:22:58.655273 kubelet[1945]: I0625 16:22:58.655253 1945 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:22:58.659874 systemd[1]: Created slice kubepods-burstable-pod0a1a74c0ad5f3d3f4a55cce33be075b1.slice - libcontainer container kubepods-burstable-pod0a1a74c0ad5f3d3f4a55cce33be075b1.slice. Jun 25 16:22:58.672996 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jun 25 16:22:58.685297 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. Jun 25 16:22:58.758064 kubelet[1945]: I0625 16:22:58.758017 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a1a74c0ad5f3d3f4a55cce33be075b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a1a74c0ad5f3d3f4a55cce33be075b1\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:58.758196 kubelet[1945]: I0625 16:22:58.758065 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:58.758196 kubelet[1945]: I0625 16:22:58.758113 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:58.758196 kubelet[1945]: I0625 16:22:58.758160 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:58.758196 kubelet[1945]: I0625 16:22:58.758178 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:22:58.758287 kubelet[1945]: I0625 16:22:58.758205 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a1a74c0ad5f3d3f4a55cce33be075b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a1a74c0ad5f3d3f4a55cce33be075b1\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:58.758287 kubelet[1945]: I0625 16:22:58.758225 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/0a1a74c0ad5f3d3f4a55cce33be075b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a1a74c0ad5f3d3f4a55cce33be075b1\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:22:58.758287 kubelet[1945]: I0625 16:22:58.758246 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:58.758287 kubelet[1945]: I0625 16:22:58.758264 1945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:22:58.842860 kubelet[1945]: W0625 16:22:58.842780 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:58.842860 kubelet[1945]: E0625 16:22:58.842821 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:58.971234 kubelet[1945]: E0625 16:22:58.971183 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:22:58.971838 containerd[1286]: time="2024-06-25T16:22:58.971783058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a1a74c0ad5f3d3f4a55cce33be075b1,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:58.984088 kubelet[1945]: E0625 16:22:58.984049 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:22:58.984601 containerd[1286]: time="2024-06-25T16:22:58.984547829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:58.987776 kubelet[1945]: E0625 16:22:58.987748 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:22:58.988136 containerd[1286]: time="2024-06-25T16:22:58.988095025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jun 25 16:22:59.280998 kubelet[1945]: W0625 16:22:59.280919 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:59.280998 kubelet[1945]: E0625 16:22:59.280988 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:59.424045 kubelet[1945]: W0625 16:22:59.424010 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:59.424045 kubelet[1945]: E0625 16:22:59.424048 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:59.696844 kubelet[1945]: W0625 16:22:59.696806 1945 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:22:59.696844 kubelet[1945]: E0625 16:22:59.696847 1945 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Jun 25 16:23:00.395897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712281779.mount: Deactivated successfully. Jun 25 16:23:00.403675 containerd[1286]: time="2024-06-25T16:23:00.403619200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.404599 containerd[1286]: time="2024-06-25T16:23:00.404564788Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.405795 containerd[1286]: time="2024-06-25T16:23:00.405599090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:23:00.406663 containerd[1286]: time="2024-06-25T16:23:00.406633141Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.407744 containerd[1286]: time="2024-06-25T16:23:00.407674738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:23:00.408623 containerd[1286]: time="2024-06-25T16:23:00.408569946Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.409582 containerd[1286]: time="2024-06-25T16:23:00.409544741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:23:00.410771 containerd[1286]: time="2024-06-25T16:23:00.410673738Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.412119 containerd[1286]: 
time="2024-06-25T16:23:00.412038279Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.413507 containerd[1286]: time="2024-06-25T16:23:00.413436927Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.415232 containerd[1286]: time="2024-06-25T16:23:00.415187015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.417297 containerd[1286]: time="2024-06-25T16:23:00.417249666Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.432584625s" Jun 25 16:23:00.418246 containerd[1286]: time="2024-06-25T16:23:00.418190134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.419572 containerd[1286]: time="2024-06-25T16:23:00.419528594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.431342901s" Jun 25 16:23:00.420188 containerd[1286]: time="2024-06-25T16:23:00.420127420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.421369 containerd[1286]: time="2024-06-25T16:23:00.421306357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.449418933s" Jun 25 16:23:00.421966 containerd[1286]: time="2024-06-25T16:23:00.421934951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.422858 containerd[1286]: time="2024-06-25T16:23:00.422829168Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:23:00.514190 containerd[1286]: time="2024-06-25T16:23:00.513998008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:00.514190 containerd[1286]: time="2024-06-25T16:23:00.514044739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514245924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514280763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514297296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514308297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514181469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514243259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514264912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:00.514461 containerd[1286]: time="2024-06-25T16:23:00.514281766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:00.515667 containerd[1286]: time="2024-06-25T16:23:00.515552221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:00.515667 containerd[1286]: time="2024-06-25T16:23:00.515574765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:00.530256 systemd[1]: Started cri-containerd-e638ce46f30247841d7eb5265f705bc6761e005b4858132ddf9df74cb52be85a.scope - libcontainer container e638ce46f30247841d7eb5265f705bc6761e005b4858132ddf9df74cb52be85a. Jun 25 16:23:00.533752 systemd[1]: Started cri-containerd-97db5e81aa3c527993ab5f5ff59dd22e4c91e23220b5b956e9a9b5ec5b6763d0.scope - libcontainer container 97db5e81aa3c527993ab5f5ff59dd22e4c91e23220b5b956e9a9b5ec5b6763d0. Jun 25 16:23:00.534875 systemd[1]: Started cri-containerd-d47455dfd0d2e70c74706fac87effd1be927e0827612f02c61dd44c3b1cc5c7e.scope - libcontainer container d47455dfd0d2e70c74706fac87effd1be927e0827612f02c61dd44c3b1cc5c7e. 
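The containerd entries above report every completed pull with its size in bytes and wall-clock duration (pause:3.9 at 321520 bytes in ~2.06 s, etcd:3.5.12-0 at 57236178 bytes in ~4.17 s, and three cached pause:3.8 resolutions at 311286 bytes in ~1.4 s each). When working with a journal like this, those figures can be extracted straight from the "Pulled image" lines; the Go sketch below does that and derives a rough throughput. The regular expression is written against the escaped-quote form shown in this log and is an illustration, not part of containerd; the sample line is abridged from the etcd pull above, with the digests truncated.

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
        "time"
    )

    // Matches containerd's `Pulled image \"<ref>\" ... size \"<bytes>\" in <duration>`
    // as it appears (with escaped quotes) in the journal lines above.
    var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* size \\"([0-9]+)\\" in ([0-9.]+s)`)

    func main() {
        // Abridged from the etcd pull logged above; image id and digest truncated here.
        line := `level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861…\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8…\", size \"57236178\" in 4.172484139s"`

        m := pulled.FindStringSubmatch(line)
        if m == nil {
            return
        }
        bytes, _ := strconv.ParseFloat(m[2], 64)
        d, _ := time.ParseDuration(m[3])
        fmt.Printf("%s: %.0f bytes in %s (~%.1f MiB/s)\n",
            m[1], bytes, d, bytes/d.Seconds()/(1<<20))
    }
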
Jun 25 16:23:00.544021 kernel: kauditd_printk_skb: 59 callbacks suppressed Jun 25 16:23:00.544147 kernel: audit: type=1334 audit(1719332580.539:291): prog-id=58 op=LOAD Jun 25 16:23:00.544165 kernel: audit: type=1334 audit(1719332580.539:292): prog-id=59 op=LOAD Jun 25 16:23:00.544176 kernel: audit: type=1300 audit(1719332580.539:292): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2010 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.539000 audit: BPF prog-id=58 op=LOAD Jun 25 16:23:00.539000 audit: BPF prog-id=59 op=LOAD Jun 25 16:23:00.539000 audit[2039]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2010 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.548268 kernel: audit: type=1327 audit(1719332580.539:292): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333863653436663330323437383431643765623532363566373035 Jun 25 16:23:00.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333863653436663330323437383431643765623532363566373035 Jun 25 16:23:00.550122 kernel: audit: type=1334 audit(1719332580.539:293): prog-id=60 op=LOAD Jun 25 16:23:00.550164 kernel: audit: type=1300 audit(1719332580.539:293): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2010 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.539000 audit: BPF prog-id=60 op=LOAD Jun 25 16:23:00.539000 audit[2039]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2010 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.556144 kernel: audit: type=1327 audit(1719332580.539:293): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333863653436663330323437383431643765623532363566373035 Jun 25 16:23:00.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333863653436663330323437383431643765623532363566373035 Jun 25 16:23:00.539000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:23:00.557107 kernel: audit: type=1334 audit(1719332580.539:294): prog-id=60 op=UNLOAD Jun 25 16:23:00.539000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:23:00.558742 kernel: audit: type=1334 audit(1719332580.539:295): prog-id=59 op=UNLOAD Jun 25 16:23:00.558769 kernel: audit: type=1334 audit(1719332580.539:296): 
prog-id=61 op=LOAD Jun 25 16:23:00.539000 audit: BPF prog-id=61 op=LOAD Jun 25 16:23:00.539000 audit[2039]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2010 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333863653436663330323437383431643765623532363566373035 Jun 25 16:23:00.543000 audit: BPF prog-id=62 op=LOAD Jun 25 16:23:00.543000 audit: BPF prog-id=63 op=LOAD Jun 25 16:23:00.543000 audit[2041]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2008 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646235653831616133633532373939336162356635666635396464 Jun 25 16:23:00.544000 audit: BPF prog-id=64 op=LOAD Jun 25 16:23:00.544000 audit[2041]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2008 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646235653831616133633532373939336162356635666635396464 Jun 25 16:23:00.544000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:23:00.544000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:23:00.544000 audit: BPF prog-id=65 op=LOAD Jun 25 16:23:00.544000 audit[2041]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2008 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646235653831616133633532373939336162356635666635396464 Jun 25 16:23:00.548000 audit: BPF prog-id=66 op=LOAD Jun 25 16:23:00.548000 audit: BPF prog-id=67 op=LOAD Jun 25 16:23:00.548000 audit[2044]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2009 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.548000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434373435356466643064326537306337343730366661633837656666 Jun 25 16:23:00.548000 audit: BPF prog-id=68 op=LOAD Jun 25 16:23:00.548000 audit[2044]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2009 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434373435356466643064326537306337343730366661633837656666 Jun 25 16:23:00.548000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:23:00.548000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:23:00.548000 audit: BPF prog-id=69 op=LOAD Jun 25 16:23:00.548000 audit[2044]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2009 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434373435356466643064326537306337343730366661633837656666 Jun 25 16:23:00.582838 containerd[1286]: time="2024-06-25T16:23:00.582781652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"e638ce46f30247841d7eb5265f705bc6761e005b4858132ddf9df74cb52be85a\"" Jun 25 16:23:00.584603 kubelet[1945]: E0625 16:23:00.584259 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:00.585387 containerd[1286]: time="2024-06-25T16:23:00.584782013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a1a74c0ad5f3d3f4a55cce33be075b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d47455dfd0d2e70c74706fac87effd1be927e0827612f02c61dd44c3b1cc5c7e\"" Jun 25 16:23:00.586968 kubelet[1945]: E0625 16:23:00.585841 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:00.587997 containerd[1286]: time="2024-06-25T16:23:00.587969634Z" level=info msg="CreateContainer within sandbox \"d47455dfd0d2e70c74706fac87effd1be927e0827612f02c61dd44c3b1cc5c7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:23:00.588114 containerd[1286]: time="2024-06-25T16:23:00.587968031Z" level=info msg="CreateContainer within sandbox \"e638ce46f30247841d7eb5265f705bc6761e005b4858132ddf9df74cb52be85a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:23:00.592529 containerd[1286]: time="2024-06-25T16:23:00.592480727Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"97db5e81aa3c527993ab5f5ff59dd22e4c91e23220b5b956e9a9b5ec5b6763d0\"" Jun 25 16:23:00.593151 kubelet[1945]: E0625 16:23:00.593132 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:00.594871 containerd[1286]: time="2024-06-25T16:23:00.594835181Z" level=info msg="CreateContainer within sandbox \"97db5e81aa3c527993ab5f5ff59dd22e4c91e23220b5b956e9a9b5ec5b6763d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:23:00.614375 containerd[1286]: time="2024-06-25T16:23:00.614300742Z" level=info msg="CreateContainer within sandbox \"e638ce46f30247841d7eb5265f705bc6761e005b4858132ddf9df74cb52be85a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c590ac736e88ed03f267c368d7d050c6b8c99acca0b201b7f1730c87a9ab748\"" Jun 25 16:23:00.615168 containerd[1286]: time="2024-06-25T16:23:00.615138778Z" level=info msg="StartContainer for \"4c590ac736e88ed03f267c368d7d050c6b8c99acca0b201b7f1730c87a9ab748\"" Jun 25 16:23:00.619851 containerd[1286]: time="2024-06-25T16:23:00.619806948Z" level=info msg="CreateContainer within sandbox \"d47455dfd0d2e70c74706fac87effd1be927e0827612f02c61dd44c3b1cc5c7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b21b06555992c10d49ee11deed4724006e5d79f5ecd13964b7c562c495af0fe8\"" Jun 25 16:23:00.620208 containerd[1286]: time="2024-06-25T16:23:00.620177056Z" level=info msg="StartContainer for \"b21b06555992c10d49ee11deed4724006e5d79f5ecd13964b7c562c495af0fe8\"" Jun 25 16:23:00.623876 containerd[1286]: time="2024-06-25T16:23:00.623850000Z" level=info msg="CreateContainer within sandbox \"97db5e81aa3c527993ab5f5ff59dd22e4c91e23220b5b956e9a9b5ec5b6763d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b3dc143db31e37527a80106e60b9e527a448d9cdfebfc657cf59e151967a0265\"" Jun 25 16:23:00.624400 containerd[1286]: time="2024-06-25T16:23:00.624363349Z" level=info msg="StartContainer for \"b3dc143db31e37527a80106e60b9e527a448d9cdfebfc657cf59e151967a0265\"" Jun 25 16:23:00.640258 systemd[1]: Started cri-containerd-4c590ac736e88ed03f267c368d7d050c6b8c99acca0b201b7f1730c87a9ab748.scope - libcontainer container 4c590ac736e88ed03f267c368d7d050c6b8c99acca0b201b7f1730c87a9ab748. Jun 25 16:23:00.642096 systemd[1]: Started cri-containerd-b21b06555992c10d49ee11deed4724006e5d79f5ecd13964b7c562c495af0fe8.scope - libcontainer container b21b06555992c10d49ee11deed4724006e5d79f5ecd13964b7c562c495af0fe8. Jun 25 16:23:00.645811 systemd[1]: Started cri-containerd-b3dc143db31e37527a80106e60b9e527a448d9cdfebfc657cf59e151967a0265.scope - libcontainer container b3dc143db31e37527a80106e60b9e527a448d9cdfebfc657cf59e151967a0265. 
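The audit PROCTITLE values in the runc records above are the process command line, hex-encoded with NUL bytes separating the arguments; the visible prefix decodes to invocations of the form runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>. A minimal Python sketch for decoding such a value (the sample is the shortened prefix of the first record above; the helper name is illustrative):

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    import binascii

    def decode_proctitle(hex_value: str) -> str:
        raw = binascii.unhexlify(hex_value)
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    # Shortened prefix of one PROCTITLE record from the log above.
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))  # -> runc --root /run/containerd/runc/k8s.io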
Jun 25 16:23:00.650000 audit: BPF prog-id=70 op=LOAD Jun 25 16:23:00.650000 audit: BPF prog-id=71 op=LOAD Jun 25 16:23:00.650000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2010 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463353930616337333665383865643033663236376333363864376430 Jun 25 16:23:00.650000 audit: BPF prog-id=72 op=LOAD Jun 25 16:23:00.650000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2010 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463353930616337333665383865643033663236376333363864376430 Jun 25 16:23:00.650000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:23:00.650000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:23:00.650000 audit: BPF prog-id=73 op=LOAD Jun 25 16:23:00.650000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2010 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463353930616337333665383865643033663236376333363864376430 Jun 25 16:23:00.654000 audit: BPF prog-id=74 op=LOAD Jun 25 16:23:00.655000 audit: BPF prog-id=75 op=LOAD Jun 25 16:23:00.655000 audit[2141]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2009 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232316230363535353939326331306434396565313164656564343732 Jun 25 16:23:00.655000 audit: BPF prog-id=76 op=LOAD Jun 25 16:23:00.655000 audit[2141]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2009 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.655000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232316230363535353939326331306434396565313164656564343732 Jun 25 16:23:00.655000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:23:00.655000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:23:00.655000 audit: BPF prog-id=77 op=LOAD Jun 25 16:23:00.655000 audit[2141]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2009 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232316230363535353939326331306434396565313164656564343732 Jun 25 16:23:00.657000 audit: BPF prog-id=78 op=LOAD Jun 25 16:23:00.658000 audit: BPF prog-id=79 op=LOAD Jun 25 16:23:00.658000 audit[2153]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2008 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233646331343364623331653337353237613830313036653630623965 Jun 25 16:23:00.658000 audit: BPF prog-id=80 op=LOAD Jun 25 16:23:00.658000 audit[2153]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2008 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233646331343364623331653337353237613830313036653630623965 Jun 25 16:23:00.658000 audit: BPF prog-id=80 op=UNLOAD Jun 25 16:23:00.658000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:23:00.658000 audit: BPF prog-id=81 op=LOAD Jun 25 16:23:00.658000 audit[2153]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2008 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:00.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233646331343364623331653337353237613830313036653630623965 Jun 25 16:23:00.681096 containerd[1286]: time="2024-06-25T16:23:00.681018327Z" level=info msg="StartContainer for \"4c590ac736e88ed03f267c368d7d050c6b8c99acca0b201b7f1730c87a9ab748\" returns successfully" Jun 25 16:23:00.685876 containerd[1286]: 
time="2024-06-25T16:23:00.685825521Z" level=info msg="StartContainer for \"b21b06555992c10d49ee11deed4724006e5d79f5ecd13964b7c562c495af0fe8\" returns successfully" Jun 25 16:23:00.693410 containerd[1286]: time="2024-06-25T16:23:00.693349512Z" level=info msg="StartContainer for \"b3dc143db31e37527a80106e60b9e527a448d9cdfebfc657cf59e151967a0265\" returns successfully" Jun 25 16:23:01.523000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.523000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000a29560 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:01.523000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:01.524000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.524000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c00132a860 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:01.524000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:01.567852 kubelet[1945]: E0625 16:23:01.567773 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:01.569663 kubelet[1945]: E0625 16:23:01.569620 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:01.570931 kubelet[1945]: E0625 16:23:01.570920 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:01.583000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.583000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c006b25890 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" 
exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:23:01.583000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:23:01.583000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=520966 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.583000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c005660ba0 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:23:01.583000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:23:01.584000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.584000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=48 a1=c00411b120 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:23:01.584000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:23:01.584000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.584000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=48 a1=c004b70200 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:23:01.584000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:23:01.584000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.584000 audit[2178]: SYSCALL 
arch=c000003e syscall=254 success=no exit=-13 a0=48 a1=c005660e10 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:23:01.584000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:23:01.584000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=520972 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:01.584000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c006bde060 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:23:01.584000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:23:01.746444 kubelet[1945]: E0625 16:23:01.746403 1945 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 16:23:01.851453 kubelet[1945]: I0625 16:23:01.851339 1945 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:23:01.858601 kubelet[1945]: I0625 16:23:01.858555 1945 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 16:23:01.865745 kubelet[1945]: E0625 16:23:01.865714 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:01.966102 kubelet[1945]: E0625 16:23:01.966024 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.066473 kubelet[1945]: E0625 16:23:02.066427 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.167209 kubelet[1945]: E0625 16:23:02.167132 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.267780 kubelet[1945]: E0625 16:23:02.267730 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.368405 kubelet[1945]: E0625 16:23:02.368344 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.469223 kubelet[1945]: E0625 16:23:02.469037 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.570224 kubelet[1945]: E0625 16:23:02.570160 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.573427 kubelet[1945]: E0625 16:23:02.573392 1945 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:02.670716 kubelet[1945]: E0625 16:23:02.670675 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.771476 kubelet[1945]: E0625 16:23:02.771356 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.872045 kubelet[1945]: E0625 16:23:02.871995 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:02.972906 kubelet[1945]: E0625 16:23:02.972861 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:03.073933 kubelet[1945]: E0625 16:23:03.073801 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:03.174533 kubelet[1945]: E0625 16:23:03.174471 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:03.274688 kubelet[1945]: E0625 16:23:03.274626 1945 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:23:03.412238 kubelet[1945]: E0625 16:23:03.412180 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:03.458859 systemd[1]: Reloading. Jun 25 16:23:03.531676 kubelet[1945]: I0625 16:23:03.531625 1945 apiserver.go:52] "Watching apiserver" Jun 25 16:23:03.538950 kubelet[1945]: I0625 16:23:03.538895 1945 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 16:23:03.573903 kubelet[1945]: E0625 16:23:03.573878 1945 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:03.624344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:23:03.699000 audit: BPF prog-id=82 op=LOAD Jun 25 16:23:03.699000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:23:03.700000 audit: BPF prog-id=83 op=LOAD Jun 25 16:23:03.700000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:23:03.703000 audit: BPF prog-id=84 op=LOAD Jun 25 16:23:03.703000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:23:03.703000 audit: BPF prog-id=85 op=LOAD Jun 25 16:23:03.703000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:23:03.703000 audit: BPF prog-id=86 op=LOAD Jun 25 16:23:03.704000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:23:03.704000 audit: BPF prog-id=87 op=LOAD Jun 25 16:23:03.704000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:23:03.705000 audit: BPF prog-id=88 op=LOAD Jun 25 16:23:03.705000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:23:03.705000 audit: BPF prog-id=89 op=LOAD Jun 25 16:23:03.705000 audit: BPF prog-id=90 op=LOAD Jun 25 16:23:03.705000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:23:03.705000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:23:03.707000 audit: BPF prog-id=91 op=LOAD Jun 25 16:23:03.707000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:23:03.707000 audit: BPF prog-id=92 op=LOAD Jun 25 16:23:03.707000 audit: BPF prog-id=93 op=LOAD Jun 25 16:23:03.707000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:23:03.707000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:23:03.708000 audit: BPF prog-id=94 op=LOAD Jun 25 16:23:03.708000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:23:03.708000 audit: BPF prog-id=95 op=LOAD Jun 25 16:23:03.708000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:23:03.709000 audit: BPF prog-id=96 op=LOAD Jun 25 16:23:03.709000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:23:03.709000 audit: BPF prog-id=97 op=LOAD Jun 25 16:23:03.709000 audit: BPF prog-id=98 op=LOAD Jun 25 16:23:03.709000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:23:03.709000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:23:03.710000 audit: BPF prog-id=99 op=LOAD Jun 25 16:23:03.710000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:23:03.710000 audit: BPF prog-id=100 op=LOAD Jun 25 16:23:03.710000 audit: BPF prog-id=101 op=LOAD Jun 25 16:23:03.710000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:23:03.710000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:23:03.721511 kubelet[1945]: I0625 16:23:03.721333 1945 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:23:03.721457 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:03.738396 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:23:03.738605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:03.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:03.747518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:03.838023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:03.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:03.885929 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
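The burst of audit records during the systemd reload pairs freshly loaded BPF program ids with UNLOADs of the programs they replace. A small Python sketch, for offline log review only, that tallies these records from a saved journal dump and prints the prog-ids still live at the end (the regex matches the "audit: BPF prog-id=N op=..." form seen above):

    # Track BPF prog-id LOAD/UNLOAD audit records in a saved log file.
    import re
    import sys

    BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def live_prog_ids(log_path: str) -> set:
        live = set()
        with open(log_path, errors="replace") as f:
            for line in f:
                for prog_id, op in BPF_RE.findall(line):
                    if op == "LOAD":
                        live.add(prog_id)
                    else:
                        live.discard(prog_id)
        return live

    if __name__ == "__main__":
        print(sorted(live_prog_ids(sys.argv[1]), key=int))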
Jun 25 16:23:03.885929 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:23:03.885929 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:23:03.886364 kubelet[2299]: I0625 16:23:03.885989 2299 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:23:03.890672 kubelet[2299]: I0625 16:23:03.890636 2299 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:23:03.890672 kubelet[2299]: I0625 16:23:03.890666 2299 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:23:03.890928 kubelet[2299]: I0625 16:23:03.890906 2299 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:23:03.892130 kubelet[2299]: I0625 16:23:03.892111 2299 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:23:03.893159 kubelet[2299]: I0625 16:23:03.893114 2299 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:23:03.899003 kubelet[2299]: I0625 16:23:03.898983 2299 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:23:03.899317 kubelet[2299]: I0625 16:23:03.899278 2299 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:23:03.901286 kubelet[2299]: I0625 16:23:03.899326 2299 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:23:03.901286 kubelet[2299]: I0625 16:23:03.901289 2299 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
16:23:03.901286 kubelet[2299]: I0625 16:23:03.901302 2299 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:23:03.901528 kubelet[2299]: I0625 16:23:03.901335 2299 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:23:03.901528 kubelet[2299]: I0625 16:23:03.901426 2299 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:23:03.901528 kubelet[2299]: I0625 16:23:03.901437 2299 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:23:03.901528 kubelet[2299]: I0625 16:23:03.901454 2299 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:23:03.901528 kubelet[2299]: I0625 16:23:03.901465 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.902187 2299 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.902307 2299 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.902619 2299 server.go:1264] "Started kubelet" Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.903714 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.903730 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.904607 2299 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.904643 2299 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:23:03.906923 kubelet[2299]: I0625 16:23:03.905543 2299 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:23:03.911212 kubelet[2299]: I0625 16:23:03.911199 2299 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:23:03.911410 kubelet[2299]: I0625 16:23:03.911400 2299 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:23:03.911597 kubelet[2299]: I0625 16:23:03.911589 2299 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:23:03.912795 kubelet[2299]: I0625 16:23:03.912783 2299 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:23:03.912931 kubelet[2299]: I0625 16:23:03.912918 2299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:23:03.915149 kubelet[2299]: I0625 16:23:03.915018 2299 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:23:03.916018 kubelet[2299]: E0625 16:23:03.915980 2299 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:23:03.917395 kubelet[2299]: I0625 16:23:03.916781 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:23:03.918309 kubelet[2299]: I0625 16:23:03.917892 2299 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:23:03.918363 kubelet[2299]: I0625 16:23:03.918312 2299 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:23:03.918363 kubelet[2299]: I0625 16:23:03.918333 2299 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:23:03.918405 kubelet[2299]: E0625 16:23:03.918377 2299 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:23:03.943419 kubelet[2299]: I0625 16:23:03.943392 2299 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:23:03.943419 kubelet[2299]: I0625 16:23:03.943409 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:23:03.943419 kubelet[2299]: I0625 16:23:03.943427 2299 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:23:03.943616 kubelet[2299]: I0625 16:23:03.943550 2299 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:23:03.943616 kubelet[2299]: I0625 16:23:03.943559 2299 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:23:03.943616 kubelet[2299]: I0625 16:23:03.943576 2299 policy_none.go:49] "None policy: Start" Jun 25 16:23:03.944103 kubelet[2299]: I0625 16:23:03.944039 2299 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:23:03.944103 kubelet[2299]: I0625 16:23:03.944085 2299 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:23:03.944318 kubelet[2299]: I0625 16:23:03.944194 2299 state_mem.go:75] "Updated machine memory state" Jun 25 16:23:03.947551 kubelet[2299]: I0625 16:23:03.947522 2299 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:23:03.947701 kubelet[2299]: I0625 16:23:03.947662 2299 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 16:23:03.947761 kubelet[2299]: I0625 16:23:03.947749 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:23:04.014778 kubelet[2299]: I0625 16:23:04.014626 2299 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:23:04.018668 kubelet[2299]: I0625 16:23:04.018613 2299 topology_manager.go:215] "Topology Admit Handler" podUID="0a1a74c0ad5f3d3f4a55cce33be075b1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:23:04.018761 kubelet[2299]: I0625 16:23:04.018710 2299 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:23:04.018796 kubelet[2299]: I0625 16:23:04.018775 2299 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:23:04.021053 kubelet[2299]: I0625 16:23:04.021024 2299 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 16:23:04.021209 kubelet[2299]: I0625 16:23:04.021112 2299 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 16:23:04.022962 kubelet[2299]: E0625 16:23:04.022934 2299 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 25 16:23:04.113495 kubelet[2299]: I0625 16:23:04.113448 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:23:04.113495 kubelet[2299]: I0625 16:23:04.113493 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:23:04.113696 kubelet[2299]: I0625 16:23:04.113524 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:23:04.113696 kubelet[2299]: I0625 16:23:04.113551 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:23:04.113696 kubelet[2299]: I0625 16:23:04.113576 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:23:04.113696 kubelet[2299]: I0625 16:23:04.113593 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a1a74c0ad5f3d3f4a55cce33be075b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a1a74c0ad5f3d3f4a55cce33be075b1\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:23:04.113696 kubelet[2299]: I0625 16:23:04.113607 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a1a74c0ad5f3d3f4a55cce33be075b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a1a74c0ad5f3d3f4a55cce33be075b1\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:23:04.113798 kubelet[2299]: I0625 16:23:04.113636 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a1a74c0ad5f3d3f4a55cce33be075b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a1a74c0ad5f3d3f4a55cce33be075b1\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:23:04.113798 kubelet[2299]: I0625 16:23:04.113668 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:23:04.324011 kubelet[2299]: E0625 16:23:04.323892 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:04.324157 kubelet[2299]: E0625 16:23:04.324025 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:04.324157 kubelet[2299]: E0625 16:23:04.324043 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:04.903196 kubelet[2299]: I0625 16:23:04.903144 2299 apiserver.go:52] "Watching apiserver" Jun 25 16:23:04.912600 kubelet[2299]: I0625 16:23:04.912550 2299 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 16:23:04.929853 kubelet[2299]: E0625 16:23:04.929811 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:04.930453 kubelet[2299]: E0625 16:23:04.930427 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:04.933896 kubelet[2299]: E0625 16:23:04.933856 2299 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 16:23:04.934464 kubelet[2299]: E0625 16:23:04.934434 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:04.963744 kubelet[2299]: I0625 16:23:04.963644 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.963620822 podStartE2EDuration="1.963620822s" podCreationTimestamp="2024-06-25 16:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:04.953776043 +0000 UTC m=+1.111006375" watchObservedRunningTime="2024-06-25 16:23:04.963620822 +0000 UTC m=+1.120851144" Jun 25 16:23:04.973418 kubelet[2299]: I0625 16:23:04.973348 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.973314736 podStartE2EDuration="973.314736ms" podCreationTimestamp="2024-06-25 16:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:04.96433419 +0000 UTC m=+1.121564522" watchObservedRunningTime="2024-06-25 16:23:04.973314736 +0000 UTC m=+1.130545058" Jun 25 16:23:04.980864 kubelet[2299]: I0625 16:23:04.980797 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.980771625 podStartE2EDuration="980.771625ms" podCreationTimestamp="2024-06-25 16:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:04.973617195 +0000 UTC m=+1.130847547" watchObservedRunningTime="2024-06-25 16:23:04.980771625 +0000 UTC m=+1.138001957" Jun 25 16:23:05.931083 kubelet[2299]: E0625 16:23:05.931030 2299 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:06.001169 kernel: kauditd_printk_skb: 128 callbacks suppressed Jun 25 16:23:06.001318 kernel: audit: type=1400 audit(1719332585.999:377): avc: denied { watch } for pid=2183 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520997 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:23:05.999000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520997 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:23:05.999000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000bfc700 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.008938 kernel: audit: type=1300 audit(1719332585.999:377): arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000bfc700 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:05.999000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.015088 kernel: audit: type=1327 audit(1719332585.999:377): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.067099 kernel: audit: type=1400 audit(1719332586.059:378): avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.067200 kernel: audit: type=1300 audit(1719332586.059:378): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f12220 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.059000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.059000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f12220 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.070458 kernel: audit: type=1327 audit(1719332586.059:378): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.059000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.059000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.074092 kernel: audit: type=1400 audit(1719332586.059:379): avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.059000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c902e0 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.083968 kernel: audit: type=1300 audit(1719332586.059:379): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c902e0 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.084001 kernel: audit: type=1327 audit(1719332586.059:379): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.059000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.086767 kernel: audit: type=1400 audit(1719332586.059:380): avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.059000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.059000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f12260 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.059000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:06.060000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:23:06.060000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000f125a0 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:23:06.060000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:23:07.487493 kubelet[2299]: E0625 16:23:07.487438 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:07.893653 update_engine[1279]: I0625 16:23:07.893569 1279 update_attempter.cc:509] Updating boot flags... Jun 25 16:23:07.928134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2374) Jun 25 16:23:07.956198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2372) Jun 25 16:23:08.604685 sudo[1421]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:08.603000 audit[1421]: USER_END pid=1421 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:08.603000 audit[1421]: CRED_DISP pid=1421 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:08.606314 sshd[1418]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:08.606000 audit[1418]: USER_END pid=1418 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:08.606000 audit[1418]: CRED_DISP pid=1418 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:08.608643 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:60902.service: Deactivated successfully. 
Jun 25 16:23:08.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.90:22-10.0.0.1:60902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:08.609617 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:23:08.609800 systemd[1]: session-7.scope: Consumed 4.669s CPU time. Jun 25 16:23:08.610583 systemd-logind[1277]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:23:08.611414 systemd-logind[1277]: Removed session 7. Jun 25 16:23:08.747312 kubelet[2299]: E0625 16:23:08.747279 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:11.151406 kubelet[2299]: E0625 16:23:11.151369 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:11.945393 kubelet[2299]: E0625 16:23:11.945358 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:17.492421 kubelet[2299]: E0625 16:23:17.492391 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:18.753194 kubelet[2299]: E0625 16:23:18.753153 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:19.374296 kubelet[2299]: I0625 16:23:19.374257 2299 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:23:19.374711 containerd[1286]: time="2024-06-25T16:23:19.374666590Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:23:19.375037 kubelet[2299]: I0625 16:23:19.374854 2299 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:23:19.774469 kubelet[2299]: I0625 16:23:19.774351 2299 topology_manager.go:215] "Topology Admit Handler" podUID="e805e71c-3111-434a-96be-5f733e95a2be" podNamespace="kube-system" podName="kube-proxy-2qb2z" Jun 25 16:23:19.781178 systemd[1]: Created slice kubepods-besteffort-pode805e71c_3111_434a_96be_5f733e95a2be.slice - libcontainer container kubepods-besteffort-pode805e71c_3111_434a_96be_5f733e95a2be.slice. 
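At 16:23:19 the kubelet publishes the node PodCIDR 192.168.0.0/24 to containerd as a runtime config update, so every pod scheduled onto this node draws its address from that range. A quick Python look at what a /24 pod range provides (the exact number of usable pod IPs depends on what the CNI plugin reserves, e.g. a gateway address):

    # Inspect the node PodCIDR announced in the log above.
    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")
    print(pod_cidr.num_addresses)        # 256 addresses in the block
    print(pod_cidr.num_addresses - 2)    # at most 254 once network/broadcast are excluded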
Jun 25 16:23:19.815479 kubelet[2299]: I0625 16:23:19.815412 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e805e71c-3111-434a-96be-5f733e95a2be-lib-modules\") pod \"kube-proxy-2qb2z\" (UID: \"e805e71c-3111-434a-96be-5f733e95a2be\") " pod="kube-system/kube-proxy-2qb2z" Jun 25 16:23:19.815479 kubelet[2299]: I0625 16:23:19.815478 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwv6\" (UniqueName: \"kubernetes.io/projected/e805e71c-3111-434a-96be-5f733e95a2be-kube-api-access-2jwv6\") pod \"kube-proxy-2qb2z\" (UID: \"e805e71c-3111-434a-96be-5f733e95a2be\") " pod="kube-system/kube-proxy-2qb2z" Jun 25 16:23:19.815724 kubelet[2299]: I0625 16:23:19.815542 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e805e71c-3111-434a-96be-5f733e95a2be-kube-proxy\") pod \"kube-proxy-2qb2z\" (UID: \"e805e71c-3111-434a-96be-5f733e95a2be\") " pod="kube-system/kube-proxy-2qb2z" Jun 25 16:23:19.815724 kubelet[2299]: I0625 16:23:19.815564 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e805e71c-3111-434a-96be-5f733e95a2be-xtables-lock\") pod \"kube-proxy-2qb2z\" (UID: \"e805e71c-3111-434a-96be-5f733e95a2be\") " pod="kube-system/kube-proxy-2qb2z" Jun 25 16:23:19.921823 kubelet[2299]: E0625 16:23:19.921779 2299 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 16:23:19.921823 kubelet[2299]: E0625 16:23:19.921812 2299 projected.go:200] Error preparing data for projected volume kube-api-access-2jwv6 for pod kube-system/kube-proxy-2qb2z: configmap "kube-root-ca.crt" not found Jun 25 16:23:19.921992 kubelet[2299]: E0625 16:23:19.921871 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e805e71c-3111-434a-96be-5f733e95a2be-kube-api-access-2jwv6 podName:e805e71c-3111-434a-96be-5f733e95a2be nodeName:}" failed. No retries permitted until 2024-06-25 16:23:20.421849781 +0000 UTC m=+16.579080113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2jwv6" (UniqueName: "kubernetes.io/projected/e805e71c-3111-434a-96be-5f733e95a2be-kube-api-access-2jwv6") pod "kube-proxy-2qb2z" (UID: "e805e71c-3111-434a-96be-5f733e95a2be") : configmap "kube-root-ca.crt" not found Jun 25 16:23:20.398759 kubelet[2299]: I0625 16:23:20.398714 2299 topology_manager.go:215] "Topology Admit Handler" podUID="faf1d568-2e81-47b7-8397-44ba081d8ad6" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-mmcl8" Jun 25 16:23:20.404158 systemd[1]: Created slice kubepods-besteffort-podfaf1d568_2e81_47b7_8397_44ba081d8ad6.slice - libcontainer container kubepods-besteffort-podfaf1d568_2e81_47b7_8397_44ba081d8ad6.slice. 
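Editor's note: the MountVolume.SetUp failure above is the projected kube-api-access volume waiting on the kube-root-ca.crt ConfigMap, which is published into each namespace shortly after the control plane comes up; kubelet simply retries after 500ms, as the record states. The sketch below checks whether that ConfigMap exists yet; it assumes the `kubernetes` Python client and a reachable kubeconfig, neither of which appears in this log.

```python
# Sketch (assumes the 'kubernetes' client package and a working kubeconfig,
# neither of which is part of this log): check whether the root-CA ConfigMap
# needed by the projected kube-api-access volume has been published yet.

from kubernetes import client, config
from kubernetes.client.rest import ApiException

def root_ca_published(namespace: str = "kube-system") -> bool:
    config.load_kube_config()   # or load_incluster_config() when run in a pod
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_config_map("kube-root-ca.crt", namespace)
        return True
    except ApiException as err:
        if err.status == 404:
            return False        # not published yet; kubelet keeps retrying
        raise

if __name__ == "__main__":
    print("kube-root-ca.crt present:", root_ca_published())
```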
Jun 25 16:23:20.419146 kubelet[2299]: I0625 16:23:20.419107 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/faf1d568-2e81-47b7-8397-44ba081d8ad6-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-mmcl8\" (UID: \"faf1d568-2e81-47b7-8397-44ba081d8ad6\") " pod="tigera-operator/tigera-operator-76ff79f7fd-mmcl8" Jun 25 16:23:20.419433 kubelet[2299]: I0625 16:23:20.419415 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtgrw\" (UniqueName: \"kubernetes.io/projected/faf1d568-2e81-47b7-8397-44ba081d8ad6-kube-api-access-xtgrw\") pod \"tigera-operator-76ff79f7fd-mmcl8\" (UID: \"faf1d568-2e81-47b7-8397-44ba081d8ad6\") " pod="tigera-operator/tigera-operator-76ff79f7fd-mmcl8" Jun 25 16:23:20.689999 kubelet[2299]: E0625 16:23:20.689847 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:20.690739 containerd[1286]: time="2024-06-25T16:23:20.690618780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qb2z,Uid:e805e71c-3111-434a-96be-5f733e95a2be,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:20.706628 containerd[1286]: time="2024-06-25T16:23:20.706544872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-mmcl8,Uid:faf1d568-2e81-47b7-8397-44ba081d8ad6,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:23:20.721316 containerd[1286]: time="2024-06-25T16:23:20.721211580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:20.721491 containerd[1286]: time="2024-06-25T16:23:20.721284699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:20.721491 containerd[1286]: time="2024-06-25T16:23:20.721311020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:20.721491 containerd[1286]: time="2024-06-25T16:23:20.721331820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:20.736200 containerd[1286]: time="2024-06-25T16:23:20.735872055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:20.736200 containerd[1286]: time="2024-06-25T16:23:20.735955595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:20.736200 containerd[1286]: time="2024-06-25T16:23:20.735984380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:20.736200 containerd[1286]: time="2024-06-25T16:23:20.736011672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:20.744353 systemd[1]: Started cri-containerd-88da55bd8a839bee1e224e31c7973054bc4280c721a0c2156884258ff2f55c18.scope - libcontainer container 88da55bd8a839bee1e224e31c7973054bc4280c721a0c2156884258ff2f55c18. 
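Editor's note: each RunPodSandbox above results in a systemd unit named cri-containerd-&lt;sandbox-id&gt;.scope, which is how the sandbox IDs in the journal map back to pods. The sketch below looks that mapping up through the CRI; it assumes crictl is installed and configured against this containerd socket, and that its JSON output lists sandboxes under "items", neither of which is shown in this log.

```python
# Sketch (assumes crictl is installed and configured for this containerd,
# which the log itself does not show): resolve a pod name to its sandbox IDs
# and derive the systemd scope unit names seen in the journal above.

import json
import subprocess

def sandbox_ids(pod_name: str) -> list[str]:
    out = subprocess.run(
        ["crictl", "pods", "--name", pod_name, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [item["id"] for item in json.loads(out).get("items", [])]

if __name__ == "__main__":
    for sid in sandbox_ids("kube-proxy-2qb2z"):
        print(sid, "->", f"cri-containerd-{sid}.scope")
```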
Jun 25 16:23:20.759342 systemd[1]: Started cri-containerd-7b50989509dc558547791ade2875835dc5facde43bca06b507661c8c8068a2ae.scope - libcontainer container 7b50989509dc558547791ade2875835dc5facde43bca06b507661c8c8068a2ae. Jun 25 16:23:20.760000 audit: BPF prog-id=102 op=LOAD Jun 25 16:23:20.762574 kernel: kauditd_printk_skb: 10 callbacks suppressed Jun 25 16:23:20.762633 kernel: audit: type=1334 audit(1719332600.760:387): prog-id=102 op=LOAD Jun 25 16:23:20.761000 audit: BPF prog-id=103 op=LOAD Jun 25 16:23:20.764716 kernel: audit: type=1334 audit(1719332600.761:388): prog-id=103 op=LOAD Jun 25 16:23:20.764793 kernel: audit: type=1300 audit(1719332600.761:388): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2410 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.761000 audit[2421]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2410 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838646135356264386138333962656531653232346533316337393733 Jun 25 16:23:20.773337 kernel: audit: type=1327 audit(1719332600.761:388): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838646135356264386138333962656531653232346533316337393733 Jun 25 16:23:20.773410 kernel: audit: type=1334 audit(1719332600.761:389): prog-id=104 op=LOAD Jun 25 16:23:20.761000 audit: BPF prog-id=104 op=LOAD Jun 25 16:23:20.761000 audit[2421]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2410 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.778816 kernel: audit: type=1300 audit(1719332600.761:389): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2410 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.783295 kernel: audit: type=1327 audit(1719332600.761:389): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838646135356264386138333962656531653232346533316337393733 Jun 25 16:23:20.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838646135356264386138333962656531653232346533316337393733 Jun 25 16:23:20.784593 kernel: audit: type=1334 audit(1719332600.761:390): prog-id=104 op=UNLOAD Jun 25 16:23:20.761000 audit: BPF prog-id=104 op=UNLOAD Jun 25 
16:23:20.785784 kernel: audit: type=1334 audit(1719332600.761:391): prog-id=103 op=UNLOAD Jun 25 16:23:20.761000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:23:20.761000 audit: BPF prog-id=105 op=LOAD Jun 25 16:23:20.786913 kernel: audit: type=1334 audit(1719332600.761:392): prog-id=105 op=LOAD Jun 25 16:23:20.761000 audit[2421]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2410 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838646135356264386138333962656531653232346533316337393733 Jun 25 16:23:20.768000 audit: BPF prog-id=106 op=LOAD Jun 25 16:23:20.769000 audit: BPF prog-id=107 op=LOAD Jun 25 16:23:20.769000 audit[2448]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2434 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.787309 containerd[1286]: time="2024-06-25T16:23:20.787236789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qb2z,Uid:e805e71c-3111-434a-96be-5f733e95a2be,Namespace:kube-system,Attempt:0,} returns sandbox id \"88da55bd8a839bee1e224e31c7973054bc4280c721a0c2156884258ff2f55c18\"" Jun 25 16:23:20.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762353039383935303964633535383534373739316164653238373538 Jun 25 16:23:20.769000 audit: BPF prog-id=108 op=LOAD Jun 25 16:23:20.769000 audit[2448]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2434 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762353039383935303964633535383534373739316164653238373538 Jun 25 16:23:20.769000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:23:20.769000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:23:20.769000 audit: BPF prog-id=109 op=LOAD Jun 25 16:23:20.769000 audit[2448]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2434 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:20.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762353039383935303964633535383534373739316164653238373538 Jun 25 16:23:20.788356 kubelet[2299]: E0625 16:23:20.788300 2299 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:20.790783 containerd[1286]: time="2024-06-25T16:23:20.790751227Z" level=info msg="CreateContainer within sandbox \"88da55bd8a839bee1e224e31c7973054bc4280c721a0c2156884258ff2f55c18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:23:20.811269 containerd[1286]: time="2024-06-25T16:23:20.811207477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-mmcl8,Uid:faf1d568-2e81-47b7-8397-44ba081d8ad6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b50989509dc558547791ade2875835dc5facde43bca06b507661c8c8068a2ae\"" Jun 25 16:23:20.813787 containerd[1286]: time="2024-06-25T16:23:20.813761883Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:23:21.221806 containerd[1286]: time="2024-06-25T16:23:21.221746533Z" level=info msg="CreateContainer within sandbox \"88da55bd8a839bee1e224e31c7973054bc4280c721a0c2156884258ff2f55c18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6eabd5bac5e851a9d431cb7c77652939eadb70335b253950f57447cea444367e\"" Jun 25 16:23:21.222255 containerd[1286]: time="2024-06-25T16:23:21.222221068Z" level=info msg="StartContainer for \"6eabd5bac5e851a9d431cb7c77652939eadb70335b253950f57447cea444367e\"" Jun 25 16:23:21.248404 systemd[1]: Started cri-containerd-6eabd5bac5e851a9d431cb7c77652939eadb70335b253950f57447cea444367e.scope - libcontainer container 6eabd5bac5e851a9d431cb7c77652939eadb70335b253950f57447cea444367e. Jun 25 16:23:21.263000 audit: BPF prog-id=110 op=LOAD Jun 25 16:23:21.263000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2410 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665616264356261633565383531613964343331636237633737363532 Jun 25 16:23:21.263000 audit: BPF prog-id=111 op=LOAD Jun 25 16:23:21.263000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2410 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665616264356261633565383531613964343331636237633737363532 Jun 25 16:23:21.263000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:23:21.263000 audit: BPF prog-id=110 op=UNLOAD Jun 25 16:23:21.263000 audit: BPF prog-id=112 op=LOAD Jun 25 16:23:21.263000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2410 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.263000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665616264356261633565383531613964343331636237633737363532 Jun 25 16:23:21.280286 containerd[1286]: time="2024-06-25T16:23:21.280148943Z" level=info msg="StartContainer for \"6eabd5bac5e851a9d431cb7c77652939eadb70335b253950f57447cea444367e\" returns successfully" Jun 25 16:23:21.338000 audit[2542]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.338000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdad483f60 a2=0 a3=7ffdad483f4c items=0 ppid=2502 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.338000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:23:21.339000 audit[2543]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.339000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc433ec680 a2=0 a3=7ffc433ec66c items=0 ppid=2502 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:23:21.340000 audit[2545]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.340000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc29dac920 a2=0 a3=7ffc29dac90c items=0 ppid=2502 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:23:21.341000 audit[2544]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.341000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda7fe5740 a2=0 a3=7ffda7fe572c items=0 ppid=2502 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:23:21.342000 audit[2546]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.342000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed1662420 a2=0 a3=7ffed166240c items=0 ppid=2502 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:23:21.342000 audit[2547]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.342000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0e913be0 a2=0 a3=7ffe0e913bcc items=0 ppid=2502 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:23:21.441000 audit[2548]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.441000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff13ebe9c0 a2=0 a3=7fff13ebe9ac items=0 ppid=2502 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:23:21.444000 audit[2550]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.444000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb1bbf010 a2=0 a3=7ffcb1bbeffc items=0 ppid=2502 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.444000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:23:21.449000 audit[2553]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.449000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd18ca23f0 a2=0 a3=7ffd18ca23dc items=0 ppid=2502 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:23:21.451000 audit[2554]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.451000 audit[2554]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6a92d8f0 a2=0 a3=7fff6a92d8dc items=0 ppid=2502 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:23:21.454000 audit[2556]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.454000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc3054550 a2=0 a3=7fffc305453c items=0 ppid=2502 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:23:21.456000 audit[2557]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.456000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc26716820 a2=0 a3=7ffc2671680c items=0 ppid=2502 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:23:21.459000 audit[2559]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.459000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc17e1c820 a2=0 a3=7ffc17e1c80c items=0 ppid=2502 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:23:21.464000 audit[2562]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.464000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe5dcf4ab0 a2=0 a3=7ffe5dcf4a9c items=0 ppid=2502 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.464000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:23:21.465000 audit[2563]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.465000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc2913220 a2=0 a3=7ffdc291320c items=0 ppid=2502 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:23:21.467000 audit[2565]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.467000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc2bca7a0 a2=0 a3=7ffcc2bca78c items=0 ppid=2502 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:23:21.468000 audit[2566]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.468000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff44fe9d90 a2=0 a3=7fff44fe9d7c items=0 ppid=2502 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:23:21.471000 audit[2568]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.471000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff37931cb0 a2=0 a3=7fff37931c9c items=0 ppid=2502 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:23:21.474000 audit[2571]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.474000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffecd0130d0 a2=0 a3=7ffecd0130bc items=0 ppid=2502 
pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:23:21.479000 audit[2574]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.479000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd01276b00 a2=0 a3=7ffd01276aec items=0 ppid=2502 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:23:21.480000 audit[2575]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.480000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5e5d7540 a2=0 a3=7fff5e5d752c items=0 ppid=2502 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:23:21.483000 audit[2577]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.483000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc2cffac60 a2=0 a3=7ffc2cffac4c items=0 ppid=2502 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:21.486000 audit[2580]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.486000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc2415900 a2=0 a3=7fffc24158ec items=0 ppid=2502 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.486000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:21.487000 audit[2581]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.487000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc292d64b0 a2=0 a3=7ffc292d649c items=0 ppid=2502 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:23:21.490000 audit[2583]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:21.490000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd698151f0 a2=0 a3=7ffd698151dc items=0 ppid=2502 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:23:21.513000 audit[2589]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:21.513000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcc6a0f310 a2=0 a3=7ffcc6a0f2fc items=0 ppid=2502 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:21.523000 audit[2589]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:21.523000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcc6a0f310 a2=0 a3=7ffcc6a0f2fc items=0 ppid=2502 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:21.525000 audit[2596]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.525000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffaf31ebc0 a2=0 a3=7fffaf31ebac items=0 ppid=2502 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:23:21.528000 audit[2598]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.528000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc3693f7f0 a2=0 a3=7ffc3693f7dc items=0 ppid=2502 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:23:21.531000 audit[2601]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.531000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffba8c5160 a2=0 a3=7fffba8c514c items=0 ppid=2502 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:23:21.535000 audit[2602]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.535000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8aaf5200 a2=0 a3=7ffc8aaf51ec items=0 ppid=2502 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:23:21.538000 audit[2604]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.538000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff41df9000 a2=0 a3=7fff41df8fec items=0 ppid=2502 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:23:21.539000 audit[2605]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 
16:23:21.539000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc23305850 a2=0 a3=7ffc2330583c items=0 ppid=2502 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:23:21.543000 audit[2607]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.543000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce3e78410 a2=0 a3=7ffce3e783fc items=0 ppid=2502 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:23:21.549000 audit[2610]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.549000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffc3e3fa00 a2=0 a3=7fffc3e3f9ec items=0 ppid=2502 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:23:21.550000 audit[2611]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.550000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde370bd40 a2=0 a3=7ffde370bd2c items=0 ppid=2502 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:23:21.555000 audit[2613]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.555000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd16a0c0c0 a2=0 a3=7ffd16a0c0ac items=0 ppid=2502 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.555000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:23:21.556000 audit[2614]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.556000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe236ac530 a2=0 a3=7ffe236ac51c items=0 ppid=2502 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:23:21.559000 audit[2616]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.559000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd905d3e10 a2=0 a3=7ffd905d3dfc items=0 ppid=2502 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:23:21.564000 audit[2619]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.564000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee13b4190 a2=0 a3=7ffee13b417c items=0 ppid=2502 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:23:21.569000 audit[2622]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.569000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6e245c60 a2=0 a3=7fff6e245c4c items=0 ppid=2502 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:23:21.570000 audit[2623]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jun 25 16:23:21.570000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc51425250 a2=0 a3=7ffc5142523c items=0 ppid=2502 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:23:21.574000 audit[2625]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.574000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcf5eea550 a2=0 a3=7ffcf5eea53c items=0 ppid=2502 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:21.577000 audit[2628]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.577000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe9b4a2960 a2=0 a3=7ffe9b4a294c items=0 ppid=2502 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:23:21.578000 audit[2629]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.578000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6d5c9580 a2=0 a3=7fff6d5c956c items=0 ppid=2502 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:23:21.581000 audit[2631]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.581000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff311d55d0 a2=0 a3=7fff311d55bc items=0 ppid=2502 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.581000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:23:21.582000 audit[2632]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.582000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe99e82b70 a2=0 a3=7ffe99e82b5c items=0 ppid=2502 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:23:21.584000 audit[2634]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.584000 audit[2634]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe722b0190 a2=0 a3=7ffe722b017c items=0 ppid=2502 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:21.588000 audit[2637]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:23:21.588000 audit[2637]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcbbc82320 a2=0 a3=7ffcbbc8230c items=0 ppid=2502 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:23:21.592000 audit[2639]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:23:21.592000 audit[2639]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffff933cdd0 a2=0 a3=7ffff933cdbc items=0 ppid=2502 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.592000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:21.592000 audit[2639]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:23:21.592000 audit[2639]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffff933cdd0 a2=0 a3=7ffff933cdbc items=0 ppid=2502 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:21.592000 
audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:21.967958 kubelet[2299]: E0625 16:23:21.967920 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:22.079757 kubelet[2299]: I0625 16:23:22.079696 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2qb2z" podStartSLOduration=3.079659483 podStartE2EDuration="3.079659483s" podCreationTimestamp="2024-06-25 16:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:23:22.079661778 +0000 UTC m=+18.236892100" watchObservedRunningTime="2024-06-25 16:23:22.079659483 +0000 UTC m=+18.236889815" Jun 25 16:23:22.969519 kubelet[2299]: E0625 16:23:22.969477 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:24.252096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440398047.mount: Deactivated successfully. Jun 25 16:23:25.312887 containerd[1286]: time="2024-06-25T16:23:25.312829897Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:25.354412 containerd[1286]: time="2024-06-25T16:23:25.354317413Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076088" Jun 25 16:23:25.370479 containerd[1286]: time="2024-06-25T16:23:25.370382657Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:25.382150 containerd[1286]: time="2024-06-25T16:23:25.382089176Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:25.391045 containerd[1286]: time="2024-06-25T16:23:25.390942456Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:25.391791 containerd[1286]: time="2024-06-25T16:23:25.391745935Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 4.577877067s" Jun 25 16:23:25.391861 containerd[1286]: time="2024-06-25T16:23:25.391792443Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:23:25.394333 containerd[1286]: time="2024-06-25T16:23:25.394284596Z" level=info msg="CreateContainer within sandbox \"7b50989509dc558547791ade2875835dc5facde43bca06b507661c8c8068a2ae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:23:25.562405 containerd[1286]: time="2024-06-25T16:23:25.562335449Z" level=info msg="CreateContainer within sandbox 
\"7b50989509dc558547791ade2875835dc5facde43bca06b507661c8c8068a2ae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eea55ef714a40bbb979a7d22d7e844b7f6c260daf46f979dd40a8fd416dfc860\"" Jun 25 16:23:25.563178 containerd[1286]: time="2024-06-25T16:23:25.562804521Z" level=info msg="StartContainer for \"eea55ef714a40bbb979a7d22d7e844b7f6c260daf46f979dd40a8fd416dfc860\"" Jun 25 16:23:25.581213 systemd[1]: run-containerd-runc-k8s.io-eea55ef714a40bbb979a7d22d7e844b7f6c260daf46f979dd40a8fd416dfc860-runc.d3ZfRB.mount: Deactivated successfully. Jun 25 16:23:25.590263 systemd[1]: Started cri-containerd-eea55ef714a40bbb979a7d22d7e844b7f6c260daf46f979dd40a8fd416dfc860.scope - libcontainer container eea55ef714a40bbb979a7d22d7e844b7f6c260daf46f979dd40a8fd416dfc860. Jun 25 16:23:25.619000 audit: BPF prog-id=113 op=LOAD Jun 25 16:23:25.619000 audit: BPF prog-id=114 op=LOAD Jun 25 16:23:25.619000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2434 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565613535656637313461343062626239373961376432326437653834 Jun 25 16:23:25.619000 audit: BPF prog-id=115 op=LOAD Jun 25 16:23:25.619000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2434 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565613535656637313461343062626239373961376432326437653834 Jun 25 16:23:25.619000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:23:25.619000 audit: BPF prog-id=114 op=UNLOAD Jun 25 16:23:25.619000 audit: BPF prog-id=116 op=LOAD Jun 25 16:23:25.619000 audit[2656]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2434 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:25.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565613535656637313461343062626239373961376432326437653834 Jun 25 16:23:25.742718 containerd[1286]: time="2024-06-25T16:23:25.742645342Z" level=info msg="StartContainer for \"eea55ef714a40bbb979a7d22d7e844b7f6c260daf46f979dd40a8fd416dfc860\" returns successfully" Jun 25 16:23:25.985603 kubelet[2299]: I0625 16:23:25.985529 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-mmcl8" podStartSLOduration=1.405702911 podStartE2EDuration="5.985509055s" podCreationTimestamp="2024-06-25 16:23:20 +0000 UTC" firstStartedPulling="2024-06-25 
16:23:20.812769689 +0000 UTC m=+16.970000021" lastFinishedPulling="2024-06-25 16:23:25.392575823 +0000 UTC m=+21.549806165" observedRunningTime="2024-06-25 16:23:25.985295418 +0000 UTC m=+22.142525770" watchObservedRunningTime="2024-06-25 16:23:25.985509055 +0000 UTC m=+22.142739407" Jun 25 16:23:28.457000 audit[2692]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.459707 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:23:28.459887 kernel: audit: type=1325 audit(1719332608.457:461): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.457000 audit[2692]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdea83f960 a2=0 a3=7ffdea83f94c items=0 ppid=2502 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.466244 kernel: audit: type=1300 audit(1719332608.457:461): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdea83f960 a2=0 a3=7ffdea83f94c items=0 ppid=2502 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.466302 kernel: audit: type=1327 audit(1719332608.457:461): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.458000 audit[2692]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.458000 audit[2692]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdea83f960 a2=0 a3=0 items=0 ppid=2502 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.476853 kernel: audit: type=1325 audit(1719332608.458:462): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.476978 kernel: audit: type=1300 audit(1719332608.458:462): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdea83f960 a2=0 a3=0 items=0 ppid=2502 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.477020 kernel: audit: type=1327 audit(1719332608.458:462): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.474000 audit[2694]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.474000 audit[2694]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 
a1=7fff06845cf0 a2=0 a3=7fff06845cdc items=0 ppid=2502 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.494348 kernel: audit: type=1325 audit(1719332608.474:463): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.494409 kernel: audit: type=1300 audit(1719332608.474:463): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff06845cf0 a2=0 a3=7fff06845cdc items=0 ppid=2502 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.494439 kernel: audit: type=1327 audit(1719332608.474:463): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.483000 audit[2694]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.483000 audit[2694]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff06845cf0 a2=0 a3=0 items=0 ppid=2502 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:28.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:28.501112 kernel: audit: type=1325 audit(1719332608.483:464): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:28.576886 kubelet[2299]: I0625 16:23:28.576840 2299 topology_manager.go:215] "Topology Admit Handler" podUID="ecd508d0-ed8b-40f4-bf75-10322e63f686" podNamespace="calico-system" podName="calico-typha-d67d88d74-kstdz" Jun 25 16:23:28.583781 systemd[1]: Created slice kubepods-besteffort-podecd508d0_ed8b_40f4_bf75_10322e63f686.slice - libcontainer container kubepods-besteffort-podecd508d0_ed8b_40f4_bf75_10322e63f686.slice. Jun 25 16:23:28.608958 kubelet[2299]: I0625 16:23:28.608909 2299 topology_manager.go:215] "Topology Admit Handler" podUID="9704d188-1947-4266-a64e-781ce2068d2a" podNamespace="calico-system" podName="calico-node-v5qnp" Jun 25 16:23:28.613441 systemd[1]: Created slice kubepods-besteffort-pod9704d188_1947_4266_a64e_781ce2068d2a.slice - libcontainer container kubepods-besteffort-pod9704d188_1947_4266_a64e_781ce2068d2a.slice. 
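The audit PROCTITLE records above store the audited command line as a single hex string, with NUL bytes separating the argv elements. As a minimal sketch (plain Python; the proctitle= value is copied from the iptables-restore records above, and any of the other PROCTITLE values decodes the same way):

```python
# Decode an audit PROCTITLE record: argv is hex-encoded with NUL separators.
hex_proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```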
Jun 25 16:23:28.668989 kubelet[2299]: I0625 16:23:28.668940 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-lib-modules\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.668989 kubelet[2299]: I0625 16:23:28.668987 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9704d188-1947-4266-a64e-781ce2068d2a-tigera-ca-bundle\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669211 kubelet[2299]: I0625 16:23:28.669015 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-flexvol-driver-host\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669211 kubelet[2299]: I0625 16:23:28.669038 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-run-calico\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669211 kubelet[2299]: I0625 16:23:28.669058 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-lib-calico\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669211 kubelet[2299]: I0625 16:23:28.669097 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfj9\" (UniqueName: \"kubernetes.io/projected/ecd508d0-ed8b-40f4-bf75-10322e63f686-kube-api-access-mwfj9\") pod \"calico-typha-d67d88d74-kstdz\" (UID: \"ecd508d0-ed8b-40f4-bf75-10322e63f686\") " pod="calico-system/calico-typha-d67d88d74-kstdz" Jun 25 16:23:28.669211 kubelet[2299]: I0625 16:23:28.669120 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-policysync\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669328 kubelet[2299]: I0625 16:23:28.669151 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecd508d0-ed8b-40f4-bf75-10322e63f686-typha-certs\") pod \"calico-typha-d67d88d74-kstdz\" (UID: \"ecd508d0-ed8b-40f4-bf75-10322e63f686\") " pod="calico-system/calico-typha-d67d88d74-kstdz" Jun 25 16:23:28.669328 kubelet[2299]: I0625 16:23:28.669168 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvx5r\" (UniqueName: \"kubernetes.io/projected/9704d188-1947-4266-a64e-781ce2068d2a-kube-api-access-bvx5r\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 
16:23:28.669328 kubelet[2299]: I0625 16:23:28.669189 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd508d0-ed8b-40f4-bf75-10322e63f686-tigera-ca-bundle\") pod \"calico-typha-d67d88d74-kstdz\" (UID: \"ecd508d0-ed8b-40f4-bf75-10322e63f686\") " pod="calico-system/calico-typha-d67d88d74-kstdz" Jun 25 16:23:28.669328 kubelet[2299]: I0625 16:23:28.669220 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-xtables-lock\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669328 kubelet[2299]: I0625 16:23:28.669243 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9704d188-1947-4266-a64e-781ce2068d2a-node-certs\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669435 kubelet[2299]: I0625 16:23:28.669263 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-net-dir\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669435 kubelet[2299]: I0625 16:23:28.669302 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-bin-dir\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.669435 kubelet[2299]: I0625 16:23:28.669330 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-log-dir\") pod \"calico-node-v5qnp\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " pod="calico-system/calico-node-v5qnp" Jun 25 16:23:28.722400 kubelet[2299]: I0625 16:23:28.722264 2299 topology_manager.go:215] "Topology Admit Handler" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" podNamespace="calico-system" podName="csi-node-driver-8m25c" Jun 25 16:23:28.723440 kubelet[2299]: E0625 16:23:28.722564 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:28.770065 kubelet[2299]: I0625 16:23:28.770012 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85542427-f47c-46c9-a170-591e5c3b27fa-registration-dir\") pod \"csi-node-driver-8m25c\" (UID: \"85542427-f47c-46c9-a170-591e5c3b27fa\") " pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:28.770065 kubelet[2299]: I0625 16:23:28.770083 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/85542427-f47c-46c9-a170-591e5c3b27fa-kubelet-dir\") pod \"csi-node-driver-8m25c\" (UID: \"85542427-f47c-46c9-a170-591e5c3b27fa\") " pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:28.770282 kubelet[2299]: I0625 16:23:28.770102 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85542427-f47c-46c9-a170-591e5c3b27fa-socket-dir\") pod \"csi-node-driver-8m25c\" (UID: \"85542427-f47c-46c9-a170-591e5c3b27fa\") " pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:28.770282 kubelet[2299]: I0625 16:23:28.770119 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5d5n\" (UniqueName: \"kubernetes.io/projected/85542427-f47c-46c9-a170-591e5c3b27fa-kube-api-access-p5d5n\") pod \"csi-node-driver-8m25c\" (UID: \"85542427-f47c-46c9-a170-591e5c3b27fa\") " pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:28.770282 kubelet[2299]: I0625 16:23:28.770158 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85542427-f47c-46c9-a170-591e5c3b27fa-varrun\") pod \"csi-node-driver-8m25c\" (UID: \"85542427-f47c-46c9-a170-591e5c3b27fa\") " pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:28.771224 kubelet[2299]: E0625 16:23:28.771207 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.771295 kubelet[2299]: W0625 16:23:28.771284 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.771349 kubelet[2299]: E0625 16:23:28.771339 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.772223 kubelet[2299]: E0625 16:23:28.772211 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.772288 kubelet[2299]: W0625 16:23:28.772279 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.772335 kubelet[2299]: E0625 16:23:28.772327 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.772971 kubelet[2299]: E0625 16:23:28.772956 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.773048 kubelet[2299]: W0625 16:23:28.773037 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.773134 kubelet[2299]: E0625 16:23:28.773121 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.773537 kubelet[2299]: E0625 16:23:28.773526 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.773599 kubelet[2299]: W0625 16:23:28.773591 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.773648 kubelet[2299]: E0625 16:23:28.773639 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.776059 kubelet[2299]: E0625 16:23:28.776039 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.776059 kubelet[2299]: W0625 16:23:28.776052 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.776178 kubelet[2299]: E0625 16:23:28.776158 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.776437 kubelet[2299]: E0625 16:23:28.776413 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.776437 kubelet[2299]: W0625 16:23:28.776432 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.776598 kubelet[2299]: E0625 16:23:28.776508 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.776598 kubelet[2299]: E0625 16:23:28.776557 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.776598 kubelet[2299]: W0625 16:23:28.776562 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.776677 kubelet[2299]: E0625 16:23:28.776626 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.776677 kubelet[2299]: E0625 16:23:28.776671 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.776677 kubelet[2299]: W0625 16:23:28.776676 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.776793 kubelet[2299]: E0625 16:23:28.776774 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.777397 kubelet[2299]: E0625 16:23:28.776946 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.777397 kubelet[2299]: W0625 16:23:28.776959 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.777397 kubelet[2299]: E0625 16:23:28.777015 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.777397 kubelet[2299]: E0625 16:23:28.777062 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.777397 kubelet[2299]: W0625 16:23:28.777086 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.777397 kubelet[2299]: E0625 16:23:28.777113 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.777397 kubelet[2299]: E0625 16:23:28.777304 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.777397 kubelet[2299]: W0625 16:23:28.777311 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.777397 kubelet[2299]: E0625 16:23:28.777321 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.777642 kubelet[2299]: E0625 16:23:28.777485 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.777642 kubelet[2299]: W0625 16:23:28.777494 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.777642 kubelet[2299]: E0625 16:23:28.777507 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.785856 kubelet[2299]: E0625 16:23:28.785834 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.785966 kubelet[2299]: W0625 16:23:28.785956 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.786354 kubelet[2299]: E0625 16:23:28.786062 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.786354 kubelet[2299]: E0625 16:23:28.786150 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.786354 kubelet[2299]: W0625 16:23:28.786158 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.786354 kubelet[2299]: E0625 16:23:28.786219 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.786884 kubelet[2299]: E0625 16:23:28.786853 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.786884 kubelet[2299]: W0625 16:23:28.786875 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.787241 kubelet[2299]: E0625 16:23:28.787221 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.787854 kubelet[2299]: E0625 16:23:28.787840 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.787854 kubelet[2299]: W0625 16:23:28.787851 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.787925 kubelet[2299]: E0625 16:23:28.787908 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.788004 kubelet[2299]: E0625 16:23:28.787992 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.788004 kubelet[2299]: W0625 16:23:28.788002 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.788060 kubelet[2299]: E0625 16:23:28.788045 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.788183 kubelet[2299]: E0625 16:23:28.788170 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.788183 kubelet[2299]: W0625 16:23:28.788180 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.788266 kubelet[2299]: E0625 16:23:28.788253 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.788363 kubelet[2299]: E0625 16:23:28.788351 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.788363 kubelet[2299]: W0625 16:23:28.788362 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.788408 kubelet[2299]: E0625 16:23:28.788400 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.788496 kubelet[2299]: E0625 16:23:28.788484 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.788496 kubelet[2299]: W0625 16:23:28.788495 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.788538 kubelet[2299]: E0625 16:23:28.788509 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.788721 kubelet[2299]: E0625 16:23:28.788707 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.788721 kubelet[2299]: W0625 16:23:28.788718 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.788775 kubelet[2299]: E0625 16:23:28.788727 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.871706 kubelet[2299]: E0625 16:23:28.871646 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.871706 kubelet[2299]: W0625 16:23:28.871676 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.871706 kubelet[2299]: E0625 16:23:28.871711 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.871984 kubelet[2299]: E0625 16:23:28.871956 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.871984 kubelet[2299]: W0625 16:23:28.871969 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.872047 kubelet[2299]: E0625 16:23:28.871986 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.872237 kubelet[2299]: E0625 16:23:28.872224 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.872344 kubelet[2299]: W0625 16:23:28.872296 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.872344 kubelet[2299]: E0625 16:23:28.872316 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.872574 kubelet[2299]: E0625 16:23:28.872520 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.872574 kubelet[2299]: W0625 16:23:28.872556 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.872574 kubelet[2299]: E0625 16:23:28.872569 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.872742 kubelet[2299]: E0625 16:23:28.872729 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.872742 kubelet[2299]: W0625 16:23:28.872740 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.872813 kubelet[2299]: E0625 16:23:28.872748 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.873014 kubelet[2299]: E0625 16:23:28.872974 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.873110 kubelet[2299]: W0625 16:23:28.873013 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.873983 kubelet[2299]: E0625 16:23:28.873740 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.874259 kubelet[2299]: E0625 16:23:28.874242 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.874450 kubelet[2299]: W0625 16:23:28.874259 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.874450 kubelet[2299]: E0625 16:23:28.874285 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.874772 kubelet[2299]: E0625 16:23:28.874610 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.874772 kubelet[2299]: W0625 16:23:28.874622 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.874772 kubelet[2299]: E0625 16:23:28.874733 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.875020 kubelet[2299]: E0625 16:23:28.875009 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.875158 kubelet[2299]: W0625 16:23:28.875108 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.875551 kubelet[2299]: E0625 16:23:28.875245 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.876346 kubelet[2299]: E0625 16:23:28.876282 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.876346 kubelet[2299]: W0625 16:23:28.876300 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.876488 kubelet[2299]: E0625 16:23:28.876470 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.876624 kubelet[2299]: E0625 16:23:28.876612 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.876667 kubelet[2299]: W0625 16:23:28.876624 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.876765 kubelet[2299]: E0625 16:23:28.876731 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.876829 kubelet[2299]: E0625 16:23:28.876818 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.876869 kubelet[2299]: W0625 16:23:28.876829 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.876899 kubelet[2299]: E0625 16:23:28.876883 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.876995 kubelet[2299]: E0625 16:23:28.876979 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.876995 kubelet[2299]: W0625 16:23:28.876992 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.877092 kubelet[2299]: E0625 16:23:28.877020 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.877202 kubelet[2299]: E0625 16:23:28.877188 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.877202 kubelet[2299]: W0625 16:23:28.877201 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.877280 kubelet[2299]: E0625 16:23:28.877230 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.877394 kubelet[2299]: E0625 16:23:28.877383 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.877423 kubelet[2299]: W0625 16:23:28.877393 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.877423 kubelet[2299]: E0625 16:23:28.877408 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.877577 kubelet[2299]: E0625 16:23:28.877561 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.877577 kubelet[2299]: W0625 16:23:28.877572 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.877670 kubelet[2299]: E0625 16:23:28.877587 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.877953 kubelet[2299]: E0625 16:23:28.877934 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.877953 kubelet[2299]: W0625 16:23:28.877951 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.878030 kubelet[2299]: E0625 16:23:28.877973 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.878201 kubelet[2299]: E0625 16:23:28.878187 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.878201 kubelet[2299]: W0625 16:23:28.878198 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.878276 kubelet[2299]: E0625 16:23:28.878266 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.878388 kubelet[2299]: E0625 16:23:28.878377 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.878427 kubelet[2299]: W0625 16:23:28.878387 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.878427 kubelet[2299]: E0625 16:23:28.878416 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.878599 kubelet[2299]: E0625 16:23:28.878589 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.878636 kubelet[2299]: W0625 16:23:28.878598 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.878636 kubelet[2299]: E0625 16:23:28.878625 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.878790 kubelet[2299]: E0625 16:23:28.878773 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.878790 kubelet[2299]: W0625 16:23:28.878785 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.878874 kubelet[2299]: E0625 16:23:28.878836 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.879222 kubelet[2299]: E0625 16:23:28.879063 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.879222 kubelet[2299]: W0625 16:23:28.879116 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.879222 kubelet[2299]: E0625 16:23:28.879189 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.879335 kubelet[2299]: E0625 16:23:28.879323 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.879335 kubelet[2299]: W0625 16:23:28.879333 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.879389 kubelet[2299]: E0625 16:23:28.879348 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.879560 kubelet[2299]: E0625 16:23:28.879536 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.879560 kubelet[2299]: W0625 16:23:28.879550 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.879560 kubelet[2299]: E0625 16:23:28.879563 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.879793 kubelet[2299]: E0625 16:23:28.879749 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.879793 kubelet[2299]: W0625 16:23:28.879758 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.879793 kubelet[2299]: E0625 16:23:28.879767 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:23:28.895629 kubelet[2299]: E0625 16:23:28.895576 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:28.897891 containerd[1286]: time="2024-06-25T16:23:28.896803784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d67d88d74-kstdz,Uid:ecd508d0-ed8b-40f4-bf75-10322e63f686,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:28.909245 kubelet[2299]: E0625 16:23:28.909206 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:23:28.909456 kubelet[2299]: W0625 16:23:28.909439 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:23:28.909533 kubelet[2299]: E0625 16:23:28.909518 2299 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:23:28.916398 kubelet[2299]: E0625 16:23:28.916265 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:28.918631 containerd[1286]: time="2024-06-25T16:23:28.918323931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5qnp,Uid:9704d188-1947-4266-a64e-781ce2068d2a,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:29.296562 containerd[1286]: time="2024-06-25T16:23:29.295743896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:29.296562 containerd[1286]: time="2024-06-25T16:23:29.295815762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:29.296562 containerd[1286]: time="2024-06-25T16:23:29.295836742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:29.296562 containerd[1286]: time="2024-06-25T16:23:29.295854726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:29.297980 containerd[1286]: time="2024-06-25T16:23:29.297499571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:29.297980 containerd[1286]: time="2024-06-25T16:23:29.297840178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:29.297980 containerd[1286]: time="2024-06-25T16:23:29.297858934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:29.297980 containerd[1286]: time="2024-06-25T16:23:29.297871387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:29.323261 systemd[1]: Started cri-containerd-a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996.scope - libcontainer container a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996. Jun 25 16:23:29.334445 systemd[1]: Started cri-containerd-1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18.scope - libcontainer container 1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18. 
Jun 25 16:23:29.377000 audit: BPF prog-id=117 op=LOAD Jun 25 16:23:29.377000 audit: BPF prog-id=118 op=LOAD Jun 25 16:23:29.377000 audit[2786]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2771 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132666439633831363034333139396634393438373065373435353238 Jun 25 16:23:29.377000 audit: BPF prog-id=119 op=LOAD Jun 25 16:23:29.377000 audit[2786]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2771 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132666439633831363034333139396634393438373065373435353238 Jun 25 16:23:29.377000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:23:29.377000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:23:29.377000 audit: BPF prog-id=120 op=LOAD Jun 25 16:23:29.377000 audit[2786]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2771 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132666439633831363034333139396634393438373065373435353238 Jun 25 16:23:29.384000 audit: BPF prog-id=121 op=LOAD Jun 25 16:23:29.385000 audit: BPF prog-id=122 op=LOAD Jun 25 16:23:29.385000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2761 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643562616664303835613635363461396665316533646561306365 Jun 25 16:23:29.385000 audit: BPF prog-id=123 op=LOAD Jun 25 16:23:29.385000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2761 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.385000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643562616664303835613635363461396665316533646561306365 Jun 25 16:23:29.385000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:23:29.385000 audit: BPF prog-id=122 op=UNLOAD Jun 25 16:23:29.385000 audit: BPF prog-id=124 op=LOAD Jun 25 16:23:29.385000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2761 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643562616664303835613635363461396665316533646561306365 Jun 25 16:23:29.398280 containerd[1286]: time="2024-06-25T16:23:29.398234512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5qnp,Uid:9704d188-1947-4266-a64e-781ce2068d2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\"" Jun 25 16:23:29.402032 kubelet[2299]: E0625 16:23:29.402008 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:29.406968 containerd[1286]: time="2024-06-25T16:23:29.406934690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:23:29.428588 containerd[1286]: time="2024-06-25T16:23:29.428549230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d67d88d74-kstdz,Uid:ecd508d0-ed8b-40f4-bf75-10322e63f686,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\"" Jun 25 16:23:29.429783 kubelet[2299]: E0625 16:23:29.429337 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:29.506000 audit[2832]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:29.506000 audit[2832]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc29660000 a2=0 a3=7ffc2965ffec items=0 ppid=2502 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:29.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:29.507000 audit[2832]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:29.507000 audit[2832]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc29660000 a2=0 a3=0 items=0 ppid=2502 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:23:29.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:29.919790 kubelet[2299]: E0625 16:23:29.919426 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:31.919533 kubelet[2299]: E0625 16:23:31.919487 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:32.577432 containerd[1286]: time="2024-06-25T16:23:32.577379434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:32.635365 containerd[1286]: time="2024-06-25T16:23:32.635283090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:23:32.728578 containerd[1286]: time="2024-06-25T16:23:32.728516588Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:32.788591 containerd[1286]: time="2024-06-25T16:23:32.788539657Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:32.857431 containerd[1286]: time="2024-06-25T16:23:32.857293523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:32.858086 containerd[1286]: time="2024-06-25T16:23:32.858015182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 3.45104208s" Jun 25 16:23:32.858151 containerd[1286]: time="2024-06-25T16:23:32.858087059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:23:32.860008 containerd[1286]: time="2024-06-25T16:23:32.859968780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:23:32.860919 containerd[1286]: time="2024-06-25T16:23:32.860888705Z" level=info msg="CreateContainer within sandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:23:33.359351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388843718.mount: Deactivated successfully. 
Jun 25 16:23:33.376243 containerd[1286]: time="2024-06-25T16:23:33.376168564Z" level=info msg="CreateContainer within sandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093\"" Jun 25 16:23:33.377169 containerd[1286]: time="2024-06-25T16:23:33.377122895Z" level=info msg="StartContainer for \"a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093\"" Jun 25 16:23:33.427239 systemd[1]: Started cri-containerd-a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093.scope - libcontainer container a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093. Jun 25 16:23:33.437000 audit: BPF prog-id=125 op=LOAD Jun 25 16:23:33.437000 audit[2846]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2771 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:33.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633831373938366239636435613834623934303961663863393634 Jun 25 16:23:33.437000 audit: BPF prog-id=126 op=LOAD Jun 25 16:23:33.437000 audit[2846]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2771 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:33.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633831373938366239636435613834623934303961663863393634 Jun 25 16:23:33.437000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:23:33.437000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:23:33.437000 audit: BPF prog-id=127 op=LOAD Jun 25 16:23:33.437000 audit[2846]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2771 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:33.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633831373938366239636435613834623934303961663863393634 Jun 25 16:23:33.459366 systemd[1]: cri-containerd-a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093.scope: Deactivated successfully. 
Jun 25 16:23:33.463000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:23:33.465557 kernel: kauditd_printk_skb: 43 callbacks suppressed Jun 25 16:23:33.466089 kernel: audit: type=1334 audit(1719332613.463:484): prog-id=127 op=UNLOAD Jun 25 16:23:33.471270 containerd[1286]: time="2024-06-25T16:23:33.471223150Z" level=info msg="StartContainer for \"a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093\" returns successfully" Jun 25 16:23:33.489505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093-rootfs.mount: Deactivated successfully. Jun 25 16:23:33.564280 containerd[1286]: time="2024-06-25T16:23:33.564212770Z" level=info msg="shim disconnected" id=a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093 namespace=k8s.io Jun 25 16:23:33.564521 containerd[1286]: time="2024-06-25T16:23:33.564297451Z" level=warning msg="cleaning up after shim disconnected" id=a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093 namespace=k8s.io Jun 25 16:23:33.564521 containerd[1286]: time="2024-06-25T16:23:33.564321896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:33.919723 kubelet[2299]: E0625 16:23:33.919667 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:33.998973 containerd[1286]: time="2024-06-25T16:23:33.998920239Z" level=info msg="StopPodSandbox for \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\"" Jun 25 16:23:33.999404 containerd[1286]: time="2024-06-25T16:23:33.998981245Z" level=info msg="Container to stop \"a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:23:34.004052 systemd[1]: cri-containerd-a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996.scope: Deactivated successfully. Jun 25 16:23:34.003000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:23:34.006095 kernel: audit: type=1334 audit(1719332614.003:485): prog-id=117 op=UNLOAD Jun 25 16:23:34.009000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:23:34.011124 kernel: audit: type=1334 audit(1719332614.009:486): prog-id=120 op=UNLOAD Jun 25 16:23:34.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.90:22-10.0.0.1:59810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:34.041504 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:59810.service - OpenSSH per-connection server daemon (10.0.0.1:59810). Jun 25 16:23:34.051092 kernel: audit: type=1130 audit(1719332614.040:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.90:22-10.0.0.1:59810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:34.076000 audit[2918]: USER_ACCT pid=2918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.077600 sshd[2918]: Accepted publickey for core from 10.0.0.1 port 59810 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:34.081000 audit[2918]: CRED_ACQ pid=2918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.083382 sshd[2918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:34.086719 systemd-logind[1277]: New session 8 of user core. Jun 25 16:23:34.105153 kernel: audit: type=1101 audit(1719332614.076:488): pid=2918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.105354 kernel: audit: type=1103 audit(1719332614.081:489): pid=2918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.105397 kernel: audit: type=1006 audit(1719332614.081:490): pid=2918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 16:23:34.081000 audit[2918]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd06b7b110 a2=3 a3=7f3196d81480 items=0 ppid=1 pid=2918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:34.111886 kernel: audit: type=1300 audit(1719332614.081:490): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd06b7b110 a2=3 a3=7f3196d81480 items=0 ppid=1 pid=2918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:34.111924 kernel: audit: type=1327 audit(1719332614.081:490): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:34.081000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:34.118306 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 16:23:34.121000 audit[2918]: USER_START pid=2918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.122000 audit[2920]: CRED_ACQ pid=2920 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.128096 kernel: audit: type=1105 audit(1719332614.121:491): pid=2918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.128990 containerd[1286]: time="2024-06-25T16:23:34.128930774Z" level=info msg="shim disconnected" id=a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996 namespace=k8s.io Jun 25 16:23:34.128990 containerd[1286]: time="2024-06-25T16:23:34.128983243Z" level=warning msg="cleaning up after shim disconnected" id=a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996 namespace=k8s.io Jun 25 16:23:34.129165 containerd[1286]: time="2024-06-25T16:23:34.128993482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:34.140494 containerd[1286]: time="2024-06-25T16:23:34.140445671Z" level=info msg="TearDown network for sandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" successfully" Jun 25 16:23:34.140494 containerd[1286]: time="2024-06-25T16:23:34.140486568Z" level=info msg="StopPodSandbox for \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" returns successfully" Jun 25 16:23:34.211699 kubelet[2299]: I0625 16:23:34.211570 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-bin-dir\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.211699 kubelet[2299]: I0625 16:23:34.211613 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-log-dir\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.211699 kubelet[2299]: I0625 16:23:34.211650 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-lib-modules\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.211699 kubelet[2299]: I0625 16:23:34.211657 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.211917 kubelet[2299]: I0625 16:23:34.211710 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.211917 kubelet[2299]: I0625 16:23:34.211718 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.211917 kubelet[2299]: I0625 16:23:34.211738 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-lib-calico\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.211917 kubelet[2299]: I0625 16:23:34.211759 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-net-dir\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.211917 kubelet[2299]: I0625 16:23:34.211799 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.212061 kubelet[2299]: I0625 16:23:34.211817 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.212061 kubelet[2299]: I0625 16:23:34.211836 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9704d188-1947-4266-a64e-781ce2068d2a-tigera-ca-bundle\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212061 kubelet[2299]: I0625 16:23:34.211858 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-policysync\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212322 kubelet[2299]: I0625 16:23:34.212297 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9704d188-1947-4266-a64e-781ce2068d2a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). 
InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:23:34.212358 kubelet[2299]: I0625 16:23:34.212339 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-policysync" (OuterVolumeSpecName: "policysync") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.212382 kubelet[2299]: I0625 16:23:34.212366 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvx5r\" (UniqueName: \"kubernetes.io/projected/9704d188-1947-4266-a64e-781ce2068d2a-kube-api-access-bvx5r\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212404 kubelet[2299]: I0625 16:23:34.212388 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-run-calico\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212657 kubelet[2299]: I0625 16:23:34.212636 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-flexvol-driver-host\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212719 kubelet[2299]: I0625 16:23:34.212687 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.212719 kubelet[2299]: I0625 16:23:34.212705 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.212782 kubelet[2299]: I0625 16:23:34.212667 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-xtables-lock\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212782 kubelet[2299]: I0625 16:23:34.212753 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9704d188-1947-4266-a64e-781ce2068d2a-node-certs\") pod \"9704d188-1947-4266-a64e-781ce2068d2a\" (UID: \"9704d188-1947-4266-a64e-781ce2068d2a\") " Jun 25 16:23:34.212874 kubelet[2299]: I0625 16:23:34.212785 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:23:34.212874 kubelet[2299]: I0625 16:23:34.212834 2299 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.212874 kubelet[2299]: I0625 16:23:34.212849 2299 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.212874 kubelet[2299]: I0625 16:23:34.212861 2299 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.212874 kubelet[2299]: I0625 16:23:34.212870 2299 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.213041 kubelet[2299]: I0625 16:23:34.212881 2299 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9704d188-1947-4266-a64e-781ce2068d2a-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.213041 kubelet[2299]: I0625 16:23:34.212891 2299 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-policysync\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.213041 kubelet[2299]: I0625 16:23:34.212901 2299 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.213041 kubelet[2299]: I0625 16:23:34.212911 2299 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.213041 kubelet[2299]: I0625 16:23:34.212922 2299 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.213041 kubelet[2299]: I0625 16:23:34.212932 2299 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9704d188-1947-4266-a64e-781ce2068d2a-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.216800 kubelet[2299]: I0625 16:23:34.215913 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9704d188-1947-4266-a64e-781ce2068d2a-kube-api-access-bvx5r" (OuterVolumeSpecName: "kube-api-access-bvx5r") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "kube-api-access-bvx5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:23:34.217379 kubelet[2299]: I0625 16:23:34.217351 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9704d188-1947-4266-a64e-781ce2068d2a-node-certs" (OuterVolumeSpecName: "node-certs") pod "9704d188-1947-4266-a64e-781ce2068d2a" (UID: "9704d188-1947-4266-a64e-781ce2068d2a"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:23:34.246940 sshd[2918]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:34.246000 audit[2918]: USER_END pid=2918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.247000 audit[2918]: CRED_DISP pid=2918 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:34.249496 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:59810.service: Deactivated successfully. Jun 25 16:23:34.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.90:22-10.0.0.1:59810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:34.250235 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:23:34.250851 systemd-logind[1277]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:23:34.251596 systemd-logind[1277]: Removed session 8. Jun 25 16:23:34.313254 kubelet[2299]: I0625 16:23:34.313209 2299 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9704d188-1947-4266-a64e-781ce2068d2a-node-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.313254 kubelet[2299]: I0625 16:23:34.313246 2299 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bvx5r\" (UniqueName: \"kubernetes.io/projected/9704d188-1947-4266-a64e-781ce2068d2a-kube-api-access-bvx5r\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:34.353574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996-rootfs.mount: Deactivated successfully. Jun 25 16:23:34.353694 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996-shm.mount: Deactivated successfully. Jun 25 16:23:34.353749 systemd[1]: var-lib-kubelet-pods-9704d188\x2d1947\x2d4266\x2da64e\x2d781ce2068d2a-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 16:23:34.353800 systemd[1]: var-lib-kubelet-pods-9704d188\x2d1947\x2d4266\x2da64e\x2d781ce2068d2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbvx5r.mount: Deactivated successfully. 
Jun 25 16:23:34.996507 kubelet[2299]: I0625 16:23:34.996463 2299 scope.go:117] "RemoveContainer" containerID="a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093" Jun 25 16:23:34.999136 containerd[1286]: time="2024-06-25T16:23:34.998277930Z" level=info msg="RemoveContainer for \"a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093\"" Jun 25 16:23:35.008151 systemd[1]: Removed slice kubepods-besteffort-pod9704d188_1947_4266_a64e_781ce2068d2a.slice - libcontainer container kubepods-besteffort-pod9704d188_1947_4266_a64e_781ce2068d2a.slice. Jun 25 16:23:35.073740 containerd[1286]: time="2024-06-25T16:23:35.073659876Z" level=info msg="RemoveContainer for \"a5c817986b9cd5a84b9409af8c964d3f494c38026aa4b023efb53a23e474a093\" returns successfully" Jun 25 16:23:35.113000 audit[2957]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:35.113000 audit[2957]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd8066c290 a2=0 a3=7ffd8066c27c items=0 ppid=2502 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:35.118441 kubelet[2299]: I0625 16:23:35.115885 2299 topology_manager.go:215] "Topology Admit Handler" podUID="47f8b684-9ff1-4125-b993-bc837ce4c390" podNamespace="calico-system" podName="calico-node-gqfmt" Jun 25 16:23:35.118441 kubelet[2299]: E0625 16:23:35.115949 2299 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9704d188-1947-4266-a64e-781ce2068d2a" containerName="flexvol-driver" Jun 25 16:23:35.118441 kubelet[2299]: I0625 16:23:35.116371 2299 memory_manager.go:354] "RemoveStaleState removing state" podUID="9704d188-1947-4266-a64e-781ce2068d2a" containerName="flexvol-driver" Jun 25 16:23:35.121609 systemd[1]: Created slice kubepods-besteffort-pod47f8b684_9ff1_4125_b993_bc837ce4c390.slice - libcontainer container kubepods-besteffort-pod47f8b684_9ff1_4125_b993_bc837ce4c390.slice. 
Jun 25 16:23:35.113000 audit[2957]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:35.113000 audit[2957]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8066c290 a2=0 a3=0 items=0 ppid=2502 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:35.218956 kubelet[2299]: I0625 16:23:35.218911 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-var-lib-calico\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.218956 kubelet[2299]: I0625 16:23:35.218959 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-policysync\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219182 kubelet[2299]: I0625 16:23:35.218982 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-lib-modules\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219182 kubelet[2299]: I0625 16:23:35.218996 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-cni-net-dir\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219182 kubelet[2299]: I0625 16:23:35.219012 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-var-run-calico\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219182 kubelet[2299]: I0625 16:23:35.219026 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-xtables-lock\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219182 kubelet[2299]: I0625 16:23:35.219042 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-flexvol-driver-host\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219353 kubelet[2299]: I0625 16:23:35.219059 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-cni-log-dir\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219353 kubelet[2299]: I0625 16:23:35.219088 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvbl\" (UniqueName: \"kubernetes.io/projected/47f8b684-9ff1-4125-b993-bc837ce4c390-kube-api-access-9wvbl\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219353 kubelet[2299]: I0625 16:23:35.219104 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f8b684-9ff1-4125-b993-bc837ce4c390-tigera-ca-bundle\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219353 kubelet[2299]: I0625 16:23:35.219118 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47f8b684-9ff1-4125-b993-bc837ce4c390-node-certs\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.219353 kubelet[2299]: I0625 16:23:35.219133 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/47f8b684-9ff1-4125-b993-bc837ce4c390-cni-bin-dir\") pod \"calico-node-gqfmt\" (UID: \"47f8b684-9ff1-4125-b993-bc837ce4c390\") " pod="calico-system/calico-node-gqfmt" Jun 25 16:23:35.259204 containerd[1286]: time="2024-06-25T16:23:35.259054891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:35.260151 containerd[1286]: time="2024-06-25T16:23:35.260093631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:23:35.262058 containerd[1286]: time="2024-06-25T16:23:35.261930063Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:35.263579 containerd[1286]: time="2024-06-25T16:23:35.263549733Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:35.265185 containerd[1286]: time="2024-06-25T16:23:35.265145268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:35.265785 containerd[1286]: time="2024-06-25T16:23:35.265730779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.405715791s" Jun 25 16:23:35.265785 containerd[1286]: time="2024-06-25T16:23:35.265764792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:23:35.272295 containerd[1286]: time="2024-06-25T16:23:35.272251711Z" level=info msg="CreateContainer within sandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:23:35.284855 containerd[1286]: time="2024-06-25T16:23:35.284797706Z" level=info msg="CreateContainer within sandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\"" Jun 25 16:23:35.285380 containerd[1286]: time="2024-06-25T16:23:35.285347589Z" level=info msg="StartContainer for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\"" Jun 25 16:23:35.315361 systemd[1]: Started cri-containerd-f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4.scope - libcontainer container f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4. Jun 25 16:23:35.326000 audit: BPF prog-id=128 op=LOAD Jun 25 16:23:35.326000 audit: BPF prog-id=129 op=LOAD Jun 25 16:23:35.326000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a5988 a2=78 a3=0 items=0 ppid=2761 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630306461643362303930393032626235363833636538333162353933 Jun 25 16:23:35.326000 audit: BPF prog-id=130 op=LOAD Jun 25 16:23:35.326000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a5720 a2=78 a3=0 items=0 ppid=2761 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630306461643362303930393032626235363833636538333162353933 Jun 25 16:23:35.326000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:23:35.326000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:23:35.326000 audit: BPF prog-id=131 op=LOAD Jun 25 16:23:35.326000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a5be0 a2=78 a3=0 items=0 ppid=2761 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630306461643362303930393032626235363833636538333162353933 Jun 25 16:23:35.357227 containerd[1286]: time="2024-06-25T16:23:35.354992136Z" level=info msg="StartContainer for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" returns successfully" Jun 25 16:23:35.428393 kubelet[2299]: E0625 
16:23:35.428298 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:35.429063 containerd[1286]: time="2024-06-25T16:23:35.429025364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gqfmt,Uid:47f8b684-9ff1-4125-b993-bc837ce4c390,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:35.458853 containerd[1286]: time="2024-06-25T16:23:35.458699653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:35.458853 containerd[1286]: time="2024-06-25T16:23:35.458795204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:35.459128 containerd[1286]: time="2024-06-25T16:23:35.458817667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:35.459128 containerd[1286]: time="2024-06-25T16:23:35.458830741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:35.480210 systemd[1]: Started cri-containerd-80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163.scope - libcontainer container 80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163. Jun 25 16:23:35.487000 audit: BPF prog-id=132 op=LOAD Jun 25 16:23:35.487000 audit: BPF prog-id=133 op=LOAD Jun 25 16:23:35.487000 audit[3017]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3008 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830666238313839643731636366336236613431663065393433346464 Jun 25 16:23:35.487000 audit: BPF prog-id=134 op=LOAD Jun 25 16:23:35.487000 audit[3017]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3008 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830666238313839643731636366336236613431663065393433346464 Jun 25 16:23:35.488000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:23:35.488000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:23:35.488000 audit: BPF prog-id=135 op=LOAD Jun 25 16:23:35.488000 audit[3017]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3008 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.488000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830666238313839643731636366336236613431663065393433346464 Jun 25 16:23:35.498133 containerd[1286]: time="2024-06-25T16:23:35.498089080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gqfmt,Uid:47f8b684-9ff1-4125-b993-bc837ce4c390,Namespace:calico-system,Attempt:0,} returns sandbox id \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\"" Jun 25 16:23:35.503268 kubelet[2299]: E0625 16:23:35.503243 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:35.506248 containerd[1286]: time="2024-06-25T16:23:35.506217901Z" level=info msg="CreateContainer within sandbox \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:23:35.919254 kubelet[2299]: E0625 16:23:35.919202 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:35.923304 kubelet[2299]: I0625 16:23:35.923282 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9704d188-1947-4266-a64e-781ce2068d2a" path="/var/lib/kubelet/pods/9704d188-1947-4266-a64e-781ce2068d2a/volumes" Jun 25 16:23:35.999740 containerd[1286]: time="2024-06-25T16:23:35.999697777Z" level=info msg="StopContainer for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" with timeout 300 (s)" Jun 25 16:23:36.000915 containerd[1286]: time="2024-06-25T16:23:36.000890057Z" level=info msg="Stop container \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" with signal terminated" Jun 25 16:23:36.006000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:23:36.007374 systemd[1]: cri-containerd-f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4.scope: Deactivated successfully. Jun 25 16:23:36.012000 audit: BPF prog-id=131 op=UNLOAD Jun 25 16:23:36.102475 kubelet[2299]: I0625 16:23:36.102344 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d67d88d74-kstdz" podStartSLOduration=2.265943509 podStartE2EDuration="8.102329413s" podCreationTimestamp="2024-06-25 16:23:28 +0000 UTC" firstStartedPulling="2024-06-25 16:23:29.430114293 +0000 UTC m=+25.587344625" lastFinishedPulling="2024-06-25 16:23:35.266500197 +0000 UTC m=+31.423730529" observedRunningTime="2024-06-25 16:23:36.101850485 +0000 UTC m=+32.259080817" watchObservedRunningTime="2024-06-25 16:23:36.102329413 +0000 UTC m=+32.259559735" Jun 25 16:23:36.354371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4-rootfs.mount: Deactivated successfully. 
Jun 25 16:23:36.478000 audit[3060]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:36.478000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcb1a5e350 a2=0 a3=7ffcb1a5e33c items=0 ppid=2502 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:36.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:36.479000 audit[3060]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:36.479000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcb1a5e350 a2=0 a3=7ffcb1a5e33c items=0 ppid=2502 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:36.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:37.245120 containerd[1286]: time="2024-06-25T16:23:37.245040053Z" level=info msg="shim disconnected" id=f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4 namespace=k8s.io Jun 25 16:23:37.245120 containerd[1286]: time="2024-06-25T16:23:37.245115556Z" level=warning msg="cleaning up after shim disconnected" id=f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4 namespace=k8s.io Jun 25 16:23:37.245120 containerd[1286]: time="2024-06-25T16:23:37.245125986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:37.614990 containerd[1286]: time="2024-06-25T16:23:37.614712566Z" level=info msg="StopContainer for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" returns successfully" Jun 25 16:23:37.615369 containerd[1286]: time="2024-06-25T16:23:37.615332581Z" level=info msg="StopPodSandbox for \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\"" Jun 25 16:23:37.615445 containerd[1286]: time="2024-06-25T16:23:37.615418384Z" level=info msg="Container to stop \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:23:37.617308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18-shm.mount: Deactivated successfully. Jun 25 16:23:37.621145 systemd[1]: cri-containerd-1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18.scope: Deactivated successfully. Jun 25 16:23:37.620000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:23:37.626000 audit: BPF prog-id=124 op=UNLOAD Jun 25 16:23:37.639968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18-rootfs.mount: Deactivated successfully. 
Jun 25 16:23:37.649485 containerd[1286]: time="2024-06-25T16:23:37.649441504Z" level=info msg="CreateContainer within sandbox \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410\"" Jun 25 16:23:37.650382 containerd[1286]: time="2024-06-25T16:23:37.649952002Z" level=info msg="StartContainer for \"753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410\"" Jun 25 16:23:37.769218 systemd[1]: Started cri-containerd-753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410.scope - libcontainer container 753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410. Jun 25 16:23:37.778000 audit: BPF prog-id=136 op=LOAD Jun 25 16:23:37.778000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3008 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:37.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735336631303838383232616364353738613266353066323735633233 Jun 25 16:23:37.778000 audit: BPF prog-id=137 op=LOAD Jun 25 16:23:37.778000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3008 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:37.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735336631303838383232616364353738613266353066323735633233 Jun 25 16:23:37.778000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:23:37.778000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:23:37.778000 audit: BPF prog-id=138 op=LOAD Jun 25 16:23:37.778000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3008 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:37.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735336631303838383232616364353738613266353066323735633233 Jun 25 16:23:37.802105 systemd[1]: cri-containerd-753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410.scope: Deactivated successfully. 
Jun 25 16:23:37.806000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:23:37.861829 containerd[1286]: time="2024-06-25T16:23:37.861749678Z" level=info msg="StartContainer for \"753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410\" returns successfully" Jun 25 16:23:37.863788 containerd[1286]: time="2024-06-25T16:23:37.863716555Z" level=info msg="shim disconnected" id=1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18 namespace=k8s.io Jun 25 16:23:37.863788 containerd[1286]: time="2024-06-25T16:23:37.863782540Z" level=warning msg="cleaning up after shim disconnected" id=1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18 namespace=k8s.io Jun 25 16:23:37.863898 containerd[1286]: time="2024-06-25T16:23:37.863793851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:37.879918 containerd[1286]: time="2024-06-25T16:23:37.879850877Z" level=warning msg="cleanup warnings time=\"2024-06-25T16:23:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 16:23:37.881161 containerd[1286]: time="2024-06-25T16:23:37.881114652Z" level=info msg="TearDown network for sandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" successfully" Jun 25 16:23:37.881320 containerd[1286]: time="2024-06-25T16:23:37.881239530Z" level=info msg="StopPodSandbox for \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" returns successfully" Jun 25 16:23:37.919795 kubelet[2299]: E0625 16:23:37.919687 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:37.941594 kubelet[2299]: I0625 16:23:37.941556 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd508d0-ed8b-40f4-bf75-10322e63f686-tigera-ca-bundle\") pod \"ecd508d0-ed8b-40f4-bf75-10322e63f686\" (UID: \"ecd508d0-ed8b-40f4-bf75-10322e63f686\") " Jun 25 16:23:37.941594 kubelet[2299]: I0625 16:23:37.941590 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecd508d0-ed8b-40f4-bf75-10322e63f686-typha-certs\") pod \"ecd508d0-ed8b-40f4-bf75-10322e63f686\" (UID: \"ecd508d0-ed8b-40f4-bf75-10322e63f686\") " Jun 25 16:23:37.941800 kubelet[2299]: I0625 16:23:37.941611 2299 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwfj9\" (UniqueName: \"kubernetes.io/projected/ecd508d0-ed8b-40f4-bf75-10322e63f686-kube-api-access-mwfj9\") pod \"ecd508d0-ed8b-40f4-bf75-10322e63f686\" (UID: \"ecd508d0-ed8b-40f4-bf75-10322e63f686\") " Jun 25 16:23:37.944229 kubelet[2299]: I0625 16:23:37.944190 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd508d0-ed8b-40f4-bf75-10322e63f686-kube-api-access-mwfj9" (OuterVolumeSpecName: "kube-api-access-mwfj9") pod "ecd508d0-ed8b-40f4-bf75-10322e63f686" (UID: "ecd508d0-ed8b-40f4-bf75-10322e63f686"). InnerVolumeSpecName "kube-api-access-mwfj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:23:38.004145 kubelet[2299]: I0625 16:23:38.004110 2299 scope.go:117] "RemoveContainer" containerID="f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4" Jun 25 16:23:38.005022 containerd[1286]: time="2024-06-25T16:23:38.004954004Z" level=info msg="RemoveContainer for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\"" Jun 25 16:23:38.005797 kubelet[2299]: E0625 16:23:38.005766 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:38.013135 kubelet[2299]: I0625 16:23:38.013102 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd508d0-ed8b-40f4-bf75-10322e63f686-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ecd508d0-ed8b-40f4-bf75-10322e63f686" (UID: "ecd508d0-ed8b-40f4-bf75-10322e63f686"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:23:38.015900 kubelet[2299]: I0625 16:23:38.015868 2299 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd508d0-ed8b-40f4-bf75-10322e63f686-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ecd508d0-ed8b-40f4-bf75-10322e63f686" (UID: "ecd508d0-ed8b-40f4-bf75-10322e63f686"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:23:38.042162 kubelet[2299]: I0625 16:23:38.042113 2299 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mwfj9\" (UniqueName: \"kubernetes.io/projected/ecd508d0-ed8b-40f4-bf75-10322e63f686-kube-api-access-mwfj9\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:38.042162 kubelet[2299]: I0625 16:23:38.042162 2299 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd508d0-ed8b-40f4-bf75-10322e63f686-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:38.042267 kubelet[2299]: I0625 16:23:38.042245 2299 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecd508d0-ed8b-40f4-bf75-10322e63f686-typha-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 16:23:38.127000 audit[3160]: NETFILTER_CFG table=filter:99 family=2 entries=15 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:38.127000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffddbdfb410 a2=0 a3=7ffddbdfb3fc items=0 ppid=2502 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:38.128000 audit[3160]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_unregister_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:38.128000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=2956 a0=3 a1=7ffddbdfb410 a2=0 a3=0 items=0 ppid=2502 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:23:38.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:38.232006 containerd[1286]: time="2024-06-25T16:23:38.231929883Z" level=info msg="shim disconnected" id=753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410 namespace=k8s.io Jun 25 16:23:38.232006 containerd[1286]: time="2024-06-25T16:23:38.231984015Z" level=warning msg="cleaning up after shim disconnected" id=753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410 namespace=k8s.io Jun 25 16:23:38.232006 containerd[1286]: time="2024-06-25T16:23:38.231992020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:38.307597 systemd[1]: Removed slice kubepods-besteffort-podecd508d0_ed8b_40f4_bf75_10322e63f686.slice - libcontainer container kubepods-besteffort-podecd508d0_ed8b_40f4_bf75_10322e63f686.slice. Jun 25 16:23:38.370792 containerd[1286]: time="2024-06-25T16:23:38.370727129Z" level=info msg="RemoveContainer for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" returns successfully" Jun 25 16:23:38.371236 kubelet[2299]: I0625 16:23:38.371009 2299 scope.go:117] "RemoveContainer" containerID="f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4" Jun 25 16:23:38.371348 containerd[1286]: time="2024-06-25T16:23:38.371269045Z" level=error msg="ContainerStatus for \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\": not found" Jun 25 16:23:38.371476 kubelet[2299]: E0625 16:23:38.371447 2299 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\": not found" containerID="f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4" Jun 25 16:23:38.371533 kubelet[2299]: I0625 16:23:38.371483 2299 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4"} err="failed to get container status \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"f00dad3b090902bb5683ce831b593cf45ecbdf1985f5b665988507bff4f013c4\": not found" Jun 25 16:23:38.617228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-753f1088822acd578a2f50f275c2384cea2df3999e235923f1644a085000d410-rootfs.mount: Deactivated successfully. Jun 25 16:23:38.617323 systemd[1]: var-lib-kubelet-pods-ecd508d0\x2ded8b\x2d40f4\x2dbf75\x2d10322e63f686-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 16:23:38.617378 systemd[1]: var-lib-kubelet-pods-ecd508d0\x2ded8b\x2d40f4\x2dbf75\x2d10322e63f686-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwfj9.mount: Deactivated successfully. Jun 25 16:23:38.617441 systemd[1]: var-lib-kubelet-pods-ecd508d0\x2ded8b\x2d40f4\x2dbf75\x2d10322e63f686-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jun 25 16:23:39.008778 kubelet[2299]: E0625 16:23:39.008737 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:39.009528 containerd[1286]: time="2024-06-25T16:23:39.009479580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:23:39.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.90:22-10.0.0.1:55544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:39.257364 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:55544.service - OpenSSH per-connection server daemon (10.0.0.1:55544). Jun 25 16:23:39.265236 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:23:39.265302 kernel: audit: type=1130 audit(1719332619.256:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.90:22-10.0.0.1:55544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:39.294000 audit[3174]: USER_ACCT pid=3174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.295798 sshd[3174]: Accepted publickey for core from 10.0.0.1 port 55544 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:39.296835 sshd[3174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:39.301963 systemd-logind[1277]: New session 9 of user core. Jun 25 16:23:39.295000 audit[3174]: CRED_ACQ pid=3174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.306010 kernel: audit: type=1101 audit(1719332619.294:525): pid=3174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.306106 kernel: audit: type=1103 audit(1719332619.295:526): pid=3174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.306141 kernel: audit: type=1006 audit(1719332619.295:527): pid=3174 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 16:23:39.295000 audit[3174]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff72f83d0 a2=3 a3=7ff027745480 items=0 ppid=1 pid=3174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.312707 kernel: audit: type=1300 audit(1719332619.295:527): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff72f83d0 a2=3 a3=7ff027745480 items=0 ppid=1 pid=3174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.312767 
kernel: audit: type=1327 audit(1719332619.295:527): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:39.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:39.323451 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:23:39.327000 audit[3174]: USER_START pid=3174 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.329000 audit[3176]: CRED_ACQ pid=3176 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.335789 kernel: audit: type=1105 audit(1719332619.327:528): pid=3174 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.335856 kernel: audit: type=1103 audit(1719332619.329:529): pid=3176 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.511067 sshd[3174]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:39.510000 audit[3174]: USER_END pid=3174 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.513397 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:55544.service: Deactivated successfully. Jun 25 16:23:39.514050 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:23:39.514631 systemd-logind[1277]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:23:39.515353 systemd-logind[1277]: Removed session 9. Jun 25 16:23:39.511000 audit[3174]: CRED_DISP pid=3174 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.526126 kernel: audit: type=1106 audit(1719332619.510:530): pid=3174 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.526221 kernel: audit: type=1104 audit(1719332619.511:531): pid=3174 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:39.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.90:22-10.0.0.1:55544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:39.919376 kubelet[2299]: E0625 16:23:39.919313 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:39.921486 kubelet[2299]: I0625 16:23:39.921453 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd508d0-ed8b-40f4-bf75-10322e63f686" path="/var/lib/kubelet/pods/ecd508d0-ed8b-40f4-bf75-10322e63f686/volumes" Jun 25 16:23:41.919576 kubelet[2299]: E0625 16:23:41.919495 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:43.967158 kubelet[2299]: E0625 16:23:43.967094 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:44.045873 containerd[1286]: time="2024-06-25T16:23:44.045802111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:44.046646 containerd[1286]: time="2024-06-25T16:23:44.046585574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:23:44.047913 containerd[1286]: time="2024-06-25T16:23:44.047884452Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:44.049680 containerd[1286]: time="2024-06-25T16:23:44.049652878Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:44.051420 containerd[1286]: time="2024-06-25T16:23:44.051360709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:44.052044 containerd[1286]: time="2024-06-25T16:23:44.052004859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.042460427s" Jun 25 16:23:44.052095 containerd[1286]: time="2024-06-25T16:23:44.052042800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:23:44.059257 containerd[1286]: time="2024-06-25T16:23:44.059230702Z" level=info msg="CreateContainer within sandbox \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:23:44.077145 containerd[1286]: 
time="2024-06-25T16:23:44.077061414Z" level=info msg="CreateContainer within sandbox \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea\"" Jun 25 16:23:44.077582 containerd[1286]: time="2024-06-25T16:23:44.077559777Z" level=info msg="StartContainer for \"7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea\"" Jun 25 16:23:44.128270 systemd[1]: Started cri-containerd-7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea.scope - libcontainer container 7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea. Jun 25 16:23:44.140000 audit: BPF prog-id=139 op=LOAD Jun 25 16:23:44.140000 audit[3202]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3008 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393464313539643033633363646337323164383639303334623435 Jun 25 16:23:44.140000 audit: BPF prog-id=140 op=LOAD Jun 25 16:23:44.140000 audit[3202]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3008 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393464313539643033633363646337323164383639303334623435 Jun 25 16:23:44.141000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:23:44.141000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:23:44.141000 audit: BPF prog-id=141 op=LOAD Jun 25 16:23:44.141000 audit[3202]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3008 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739393464313539643033633363646337323164383639303334623435 Jun 25 16:23:44.268998 containerd[1286]: time="2024-06-25T16:23:44.268271150Z" level=info msg="StartContainer for \"7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea\" returns successfully" Jun 25 16:23:44.521873 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:55548.service - OpenSSH per-connection server daemon (10.0.0.1:55548). Jun 25 16:23:44.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.90:22-10.0.0.1:55548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:44.525876 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 16:23:44.525961 kernel: audit: type=1130 audit(1719332624.520:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.90:22-10.0.0.1:55548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:44.662000 audit[3231]: USER_ACCT pid=3231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.664017 sshd[3231]: Accepted publickey for core from 10.0.0.1 port 55548 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:44.665719 sshd[3231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:44.663000 audit[3231]: CRED_ACQ pid=3231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.671321 kernel: audit: type=1101 audit(1719332624.662:539): pid=3231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.671374 kernel: audit: type=1103 audit(1719332624.663:540): pid=3231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.671395 kernel: audit: type=1006 audit(1719332624.664:541): pid=3231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:23:44.671022 systemd-logind[1277]: New session 10 of user core. Jun 25 16:23:44.687414 kernel: audit: type=1300 audit(1719332624.664:541): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff253dc9d0 a2=3 a3=7f50fce3a480 items=0 ppid=1 pid=3231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.687445 kernel: audit: type=1327 audit(1719332624.664:541): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:44.664000 audit[3231]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff253dc9d0 a2=3 a3=7f50fce3a480 items=0 ppid=1 pid=3231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:44.664000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:44.687350 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:23:44.690000 audit[3231]: USER_START pid=3231 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.692000 audit[3233]: CRED_ACQ pid=3233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.698016 kernel: audit: type=1105 audit(1719332624.690:542): pid=3231 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.698142 kernel: audit: type=1103 audit(1719332624.692:543): pid=3233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.795232 sshd[3231]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:44.795000 audit[3231]: USER_END pid=3231 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.798402 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:55548.service: Deactivated successfully. Jun 25 16:23:44.799427 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:23:44.800155 systemd-logind[1277]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:23:44.795000 audit[3231]: CRED_DISP pid=3231 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.801083 systemd-logind[1277]: Removed session 10. Jun 25 16:23:44.803827 kernel: audit: type=1106 audit(1719332624.795:544): pid=3231 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.803889 kernel: audit: type=1104 audit(1719332624.795:545): pid=3231 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:44.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.90:22-10.0.0.1:55548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:45.036403 kubelet[2299]: E0625 16:23:45.036367 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:45.227989 systemd[1]: cri-containerd-7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea.scope: Deactivated successfully. Jun 25 16:23:45.232000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:23:45.244255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea-rootfs.mount: Deactivated successfully. Jun 25 16:23:45.270718 kubelet[2299]: I0625 16:23:45.270689 2299 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 16:23:45.294173 kubelet[2299]: I0625 16:23:45.294112 2299 topology_manager.go:215] "Topology Admit Handler" podUID="d669eea6-8c43-4a18-a92b-b250a05611e1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rzb9h" Jun 25 16:23:45.299889 kubelet[2299]: E0625 16:23:45.294209 2299 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecd508d0-ed8b-40f4-bf75-10322e63f686" containerName="calico-typha" Jun 25 16:23:45.299889 kubelet[2299]: I0625 16:23:45.294242 2299 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd508d0-ed8b-40f4-bf75-10322e63f686" containerName="calico-typha" Jun 25 16:23:45.299889 kubelet[2299]: I0625 16:23:45.294436 2299 topology_manager.go:215] "Topology Admit Handler" podUID="836f38ef-e93b-478c-ac66-9060ef4334b9" podNamespace="calico-system" podName="calico-kube-controllers-55495db4d7-r5s6p" Jun 25 16:23:45.299889 kubelet[2299]: I0625 16:23:45.294797 2299 topology_manager.go:215] "Topology Admit Handler" podUID="7a782fdb-c775-468b-a146-70b65f402d66" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7p2pf" Jun 25 16:23:45.301467 systemd[1]: Created slice kubepods-burstable-podd669eea6_8c43_4a18_a92b_b250a05611e1.slice - libcontainer container kubepods-burstable-podd669eea6_8c43_4a18_a92b_b250a05611e1.slice. Jun 25 16:23:45.306763 systemd[1]: Created slice kubepods-burstable-pod7a782fdb_c775_468b_a146_70b65f402d66.slice - libcontainer container kubepods-burstable-pod7a782fdb_c775_468b_a146_70b65f402d66.slice. Jun 25 16:23:45.310058 systemd[1]: Created slice kubepods-besteffort-pod836f38ef_e93b_478c_ac66_9060ef4334b9.slice - libcontainer container kubepods-besteffort-pod836f38ef_e93b_478c_ac66_9060ef4334b9.slice. 
Jun 25 16:23:45.547713 kubelet[2299]: I0625 16:23:45.547545 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d669eea6-8c43-4a18-a92b-b250a05611e1-config-volume\") pod \"coredns-7db6d8ff4d-rzb9h\" (UID: \"d669eea6-8c43-4a18-a92b-b250a05611e1\") " pod="kube-system/coredns-7db6d8ff4d-rzb9h" Jun 25 16:23:45.547713 kubelet[2299]: I0625 16:23:45.547608 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jp9\" (UniqueName: \"kubernetes.io/projected/836f38ef-e93b-478c-ac66-9060ef4334b9-kube-api-access-67jp9\") pod \"calico-kube-controllers-55495db4d7-r5s6p\" (UID: \"836f38ef-e93b-478c-ac66-9060ef4334b9\") " pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" Jun 25 16:23:45.547713 kubelet[2299]: I0625 16:23:45.547629 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68fc\" (UniqueName: \"kubernetes.io/projected/d669eea6-8c43-4a18-a92b-b250a05611e1-kube-api-access-x68fc\") pod \"coredns-7db6d8ff4d-rzb9h\" (UID: \"d669eea6-8c43-4a18-a92b-b250a05611e1\") " pod="kube-system/coredns-7db6d8ff4d-rzb9h" Jun 25 16:23:45.547713 kubelet[2299]: I0625 16:23:45.547645 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836f38ef-e93b-478c-ac66-9060ef4334b9-tigera-ca-bundle\") pod \"calico-kube-controllers-55495db4d7-r5s6p\" (UID: \"836f38ef-e93b-478c-ac66-9060ef4334b9\") " pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" Jun 25 16:23:45.547713 kubelet[2299]: I0625 16:23:45.547659 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8lwt\" (UniqueName: \"kubernetes.io/projected/7a782fdb-c775-468b-a146-70b65f402d66-kube-api-access-s8lwt\") pod \"coredns-7db6d8ff4d-7p2pf\" (UID: \"7a782fdb-c775-468b-a146-70b65f402d66\") " pod="kube-system/coredns-7db6d8ff4d-7p2pf" Jun 25 16:23:45.548051 kubelet[2299]: I0625 16:23:45.547707 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a782fdb-c775-468b-a146-70b65f402d66-config-volume\") pod \"coredns-7db6d8ff4d-7p2pf\" (UID: \"7a782fdb-c775-468b-a146-70b65f402d66\") " pod="kube-system/coredns-7db6d8ff4d-7p2pf" Jun 25 16:23:45.614208 containerd[1286]: time="2024-06-25T16:23:45.614125612Z" level=info msg="shim disconnected" id=7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea namespace=k8s.io Jun 25 16:23:45.614208 containerd[1286]: time="2024-06-25T16:23:45.614190755Z" level=warning msg="cleaning up after shim disconnected" id=7994d159d03c3cdc721d869034b45753e8cbc32b059546e532ff0f1a83afbdea namespace=k8s.io Jun 25 16:23:45.614208 containerd[1286]: time="2024-06-25T16:23:45.614200904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:23:45.847625 kubelet[2299]: E0625 16:23:45.847499 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:45.847625 kubelet[2299]: E0625 16:23:45.847610 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:45.848192 
containerd[1286]: time="2024-06-25T16:23:45.848151734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7p2pf,Uid:7a782fdb-c775-468b-a146-70b65f402d66,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:45.848300 containerd[1286]: time="2024-06-25T16:23:45.848151855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55495db4d7-r5s6p,Uid:836f38ef-e93b-478c-ac66-9060ef4334b9,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:45.848424 containerd[1286]: time="2024-06-25T16:23:45.848278714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzb9h,Uid:d669eea6-8c43-4a18-a92b-b250a05611e1,Namespace:kube-system,Attempt:0,}" Jun 25 16:23:45.925145 systemd[1]: Created slice kubepods-besteffort-pod85542427_f47c_46c9_a170_591e5c3b27fa.slice - libcontainer container kubepods-besteffort-pod85542427_f47c_46c9_a170_591e5c3b27fa.slice. Jun 25 16:23:45.927410 containerd[1286]: time="2024-06-25T16:23:45.927372756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m25c,Uid:85542427-f47c-46c9-a170-591e5c3b27fa,Namespace:calico-system,Attempt:0,}" Jun 25 16:23:46.025903 containerd[1286]: time="2024-06-25T16:23:46.025808857Z" level=error msg="Failed to destroy network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.026268 containerd[1286]: time="2024-06-25T16:23:46.026227980Z" level=error msg="encountered an error cleaning up failed sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.026341 containerd[1286]: time="2024-06-25T16:23:46.026303933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55495db4d7-r5s6p,Uid:836f38ef-e93b-478c-ac66-9060ef4334b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.026644 kubelet[2299]: E0625 16:23:46.026580 2299 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.026707 kubelet[2299]: E0625 16:23:46.026687 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" Jun 25 16:23:46.026735 kubelet[2299]: 
E0625 16:23:46.026717 2299 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" Jun 25 16:23:46.026811 kubelet[2299]: E0625 16:23:46.026771 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55495db4d7-r5s6p_calico-system(836f38ef-e93b-478c-ac66-9060ef4334b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55495db4d7-r5s6p_calico-system(836f38ef-e93b-478c-ac66-9060ef4334b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" podUID="836f38ef-e93b-478c-ac66-9060ef4334b9" Jun 25 16:23:46.032292 containerd[1286]: time="2024-06-25T16:23:46.032218202Z" level=error msg="Failed to destroy network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.032771 containerd[1286]: time="2024-06-25T16:23:46.032702048Z" level=error msg="encountered an error cleaning up failed sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.033007 containerd[1286]: time="2024-06-25T16:23:46.032977659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m25c,Uid:85542427-f47c-46c9-a170-591e5c3b27fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.033242 kubelet[2299]: E0625 16:23:46.033208 2299 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.033333 kubelet[2299]: E0625 16:23:46.033256 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:46.033333 kubelet[2299]: E0625 16:23:46.033294 2299 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8m25c" Jun 25 16:23:46.033394 kubelet[2299]: E0625 16:23:46.033351 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8m25c_calico-system(85542427-f47c-46c9-a170-591e5c3b27fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8m25c_calico-system(85542427-f47c-46c9-a170-591e5c3b27fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:46.037029 containerd[1286]: time="2024-06-25T16:23:46.036891414Z" level=error msg="Failed to destroy network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.037531 containerd[1286]: time="2024-06-25T16:23:46.037492330Z" level=error msg="encountered an error cleaning up failed sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.037611 containerd[1286]: time="2024-06-25T16:23:46.037562583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7p2pf,Uid:7a782fdb-c775-468b-a146-70b65f402d66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.037917 kubelet[2299]: E0625 16:23:46.037876 2299 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.038286 kubelet[2299]: E0625 16:23:46.037929 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7p2pf" Jun 25 16:23:46.038286 kubelet[2299]: E0625 16:23:46.037954 2299 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7p2pf" Jun 25 16:23:46.038286 kubelet[2299]: E0625 16:23:46.038004 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7p2pf_kube-system(7a782fdb-c775-468b-a146-70b65f402d66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7p2pf_kube-system(7a782fdb-c775-468b-a146-70b65f402d66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7p2pf" podUID="7a782fdb-c775-468b-a146-70b65f402d66" Jun 25 16:23:46.038512 kubelet[2299]: I0625 16:23:46.038486 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:23:46.039130 containerd[1286]: time="2024-06-25T16:23:46.039102487Z" level=info msg="StopPodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\"" Jun 25 16:23:46.039441 containerd[1286]: time="2024-06-25T16:23:46.039419917Z" level=info msg="Ensure that sandbox ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea in task-service has been cleanup successfully" Jun 25 16:23:46.040105 kubelet[2299]: I0625 16:23:46.040081 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:23:46.041324 containerd[1286]: time="2024-06-25T16:23:46.041287670Z" level=info msg="StopPodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\"" Jun 25 16:23:46.041513 containerd[1286]: time="2024-06-25T16:23:46.041489332Z" level=info msg="Ensure that sandbox 41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc in task-service has been cleanup successfully" Jun 25 16:23:46.045932 containerd[1286]: time="2024-06-25T16:23:46.045862986Z" level=error msg="Failed to destroy network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.046472 containerd[1286]: time="2024-06-25T16:23:46.046435429Z" level=error msg="encountered an error cleaning up failed sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.046532 
containerd[1286]: time="2024-06-25T16:23:46.046497276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzb9h,Uid:d669eea6-8c43-4a18-a92b-b250a05611e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.046879 kubelet[2299]: E0625 16:23:46.046841 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:46.048503 kubelet[2299]: E0625 16:23:46.048216 2299 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.048503 kubelet[2299]: E0625 16:23:46.048263 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rzb9h" Jun 25 16:23:46.048503 kubelet[2299]: E0625 16:23:46.048289 2299 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rzb9h" Jun 25 16:23:46.048670 kubelet[2299]: E0625 16:23:46.048319 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rzb9h_kube-system(d669eea6-8c43-4a18-a92b-b250a05611e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rzb9h_kube-system(d669eea6-8c43-4a18-a92b-b250a05611e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rzb9h" podUID="d669eea6-8c43-4a18-a92b-b250a05611e1" Jun 25 16:23:46.051016 containerd[1286]: time="2024-06-25T16:23:46.050966972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:23:46.071670 containerd[1286]: time="2024-06-25T16:23:46.071578865Z" level=error msg="StopPodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" failed" error="failed to destroy network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.071939 kubelet[2299]: E0625 16:23:46.071881 2299 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:23:46.072010 kubelet[2299]: E0625 16:23:46.071946 2299 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea"} Jun 25 16:23:46.072010 kubelet[2299]: E0625 16:23:46.071982 2299 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"836f38ef-e93b-478c-ac66-9060ef4334b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:46.072119 kubelet[2299]: E0625 16:23:46.072007 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"836f38ef-e93b-478c-ac66-9060ef4334b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" podUID="836f38ef-e93b-478c-ac66-9060ef4334b9" Jun 25 16:23:46.081791 containerd[1286]: time="2024-06-25T16:23:46.081717475Z" level=error msg="StopPodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" failed" error="failed to destroy network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:46.082112 kubelet[2299]: E0625 16:23:46.082054 2299 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:23:46.082170 kubelet[2299]: E0625 16:23:46.082121 2299 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc"} Jun 25 16:23:46.082170 kubelet[2299]: E0625 16:23:46.082155 2299 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85542427-f47c-46c9-a170-591e5c3b27fa\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:46.082253 kubelet[2299]: E0625 16:23:46.082192 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85542427-f47c-46c9-a170-591e5c3b27fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8m25c" podUID="85542427-f47c-46c9-a170-591e5c3b27fa" Jun 25 16:23:46.245209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052-shm.mount: Deactivated successfully. Jun 25 16:23:47.048009 kubelet[2299]: I0625 16:23:47.047967 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:23:47.048520 containerd[1286]: time="2024-06-25T16:23:47.048487995Z" level=info msg="StopPodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\"" Jun 25 16:23:47.048701 kubelet[2299]: I0625 16:23:47.048563 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:23:47.048899 containerd[1286]: time="2024-06-25T16:23:47.048874056Z" level=info msg="StopPodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\"" Jun 25 16:23:47.048949 containerd[1286]: time="2024-06-25T16:23:47.048887050Z" level=info msg="Ensure that sandbox 1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d in task-service has been cleanup successfully" Jun 25 16:23:47.049112 containerd[1286]: time="2024-06-25T16:23:47.049084534Z" level=info msg="Ensure that sandbox 01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052 in task-service has been cleanup successfully" Jun 25 16:23:47.073454 containerd[1286]: time="2024-06-25T16:23:47.073375842Z" level=error msg="StopPodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" failed" error="failed to destroy network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:47.073659 containerd[1286]: time="2024-06-25T16:23:47.073424314Z" level=error msg="StopPodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" failed" error="failed to destroy network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:23:47.073812 kubelet[2299]: E0625 16:23:47.073740 2299 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to destroy network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:23:47.073891 kubelet[2299]: E0625 16:23:47.073823 2299 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052"} Jun 25 16:23:47.073891 kubelet[2299]: E0625 16:23:47.073868 2299 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a782fdb-c775-468b-a146-70b65f402d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:47.074008 kubelet[2299]: E0625 16:23:47.073900 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a782fdb-c775-468b-a146-70b65f402d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7p2pf" podUID="7a782fdb-c775-468b-a146-70b65f402d66" Jun 25 16:23:47.074008 kubelet[2299]: E0625 16:23:47.073897 2299 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:23:47.074008 kubelet[2299]: E0625 16:23:47.073959 2299 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d"} Jun 25 16:23:47.074144 kubelet[2299]: E0625 16:23:47.074010 2299 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d669eea6-8c43-4a18-a92b-b250a05611e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:23:47.074144 kubelet[2299]: E0625 16:23:47.074044 2299 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d669eea6-8c43-4a18-a92b-b250a05611e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rzb9h" podUID="d669eea6-8c43-4a18-a92b-b250a05611e1" Jun 25 16:23:48.672000 audit[3533]: NETFILTER_CFG table=filter:101 family=2 entries=15 op=nft_register_rule pid=3533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:48.672000 audit[3533]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd5a191a40 a2=0 a3=7ffd5a191a2c items=0 ppid=2502 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:48.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:48.672000 audit[3533]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:23:48.672000 audit[3533]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd5a191a40 a2=0 a3=7ffd5a191a2c items=0 ppid=2502 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:48.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:23:49.684494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923644849.mount: Deactivated successfully. Jun 25 16:23:49.806708 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:47946.service - OpenSSH per-connection server daemon (10.0.0.1:47946). Jun 25 16:23:49.807749 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:23:49.807785 kernel: audit: type=1130 audit(1719332629.805:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.90:22-10.0.0.1:47946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:49.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.90:22-10.0.0.1:47946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:50.796000 audit[3535]: USER_ACCT pid=3535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.797938 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 47946 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:50.814001 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:50.817509 systemd-logind[1277]: New session 11 of user core. 
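The repeated KillPodSandbox / StopPodSandbox failures above all reduce to the same precondition: the calico CNI delete path stats /var/lib/calico/nodename and gives up while the file is missing, i.e. until the calico/node container is running and has the host's /var/lib/calico/ mounted. A minimal illustrative sketch of that check (not Calico's actual code; the path and error text are taken from the log messages above):

    from pathlib import Path

    NODENAME_FILE = Path("/var/lib/calico/nodename")  # written by calico/node once it starts

    def require_calico_nodename() -> str:
        # Reproduces the failure mode logged above: until calico/node has mounted
        # /var/lib/calico/ and written its nodename, CNI add/delete cannot proceed.
        if not NODENAME_FILE.exists():
            raise FileNotFoundError(
                f"stat {NODENAME_FILE}: no such file or directory: "
                "check that the calico/node container is running and has mounted /var/lib/calico/"
            )
        return NODENAME_FILE.read_text().strip()

Once the calico-node container starts a little later in this log, the same sandbox teardowns complete successfully.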
Jun 25 16:23:50.797000 audit[3535]: CRED_ACQ pid=3535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.821283 kernel: audit: type=1101 audit(1719332630.796:551): pid=3535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.821340 kernel: audit: type=1103 audit(1719332630.797:552): pid=3535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.821358 kernel: audit: type=1006 audit(1719332630.797:553): pid=3535 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:23:50.823455 kernel: audit: type=1300 audit(1719332630.797:553): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3e7f4a40 a2=3 a3=7fd7e4451480 items=0 ppid=1 pid=3535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:50.797000 audit[3535]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3e7f4a40 a2=3 a3=7fd7e4451480 items=0 ppid=1 pid=3535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:50.835823 kernel: audit: type=1327 audit(1719332630.797:553): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:50.797000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:50.838250 systemd[1]: Started session-11.scope - Session 11 of User core. 
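The audit PROCTITLE records in this log (for example proctitle=737368643A20636F7265205B707269765D just above) carry the process's argv hex-encoded with NUL separators. A small helper makes them readable; both sample values are copied from records above:

    import binascii

    def decode_proctitle(hex_title: str) -> str:
        # The audit subsystem hex-encodes proctitle; the arguments are NUL-separated.
        return binascii.unhexlify(hex_title).replace(b"\x00", b" ").decode()

    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters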
Jun 25 16:23:50.841000 audit[3535]: USER_START pid=3535 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.842000 audit[3537]: CRED_ACQ pid=3537 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.848860 kernel: audit: type=1105 audit(1719332630.841:554): pid=3535 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:50.848916 kernel: audit: type=1103 audit(1719332630.842:555): pid=3537 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:51.051338 sshd[3535]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:51.051000 audit[3535]: USER_END pid=3535 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:51.053697 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:47946.service: Deactivated successfully. Jun 25 16:23:51.054681 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:23:51.055377 systemd-logind[1277]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:23:51.051000 audit[3535]: CRED_DISP pid=3535 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:51.056214 systemd-logind[1277]: Removed session 11. Jun 25 16:23:51.060160 kernel: audit: type=1106 audit(1719332631.051:556): pid=3535 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:51.060215 kernel: audit: type=1104 audit(1719332631.051:557): pid=3535 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:51.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.90:22-10.0.0.1:47946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:51.173699 containerd[1286]: time="2024-06-25T16:23:51.173623919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:51.185640 containerd[1286]: time="2024-06-25T16:23:51.185555237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:23:51.220585 containerd[1286]: time="2024-06-25T16:23:51.220509446Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:51.311528 containerd[1286]: time="2024-06-25T16:23:51.311391929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.260366227s" Jun 25 16:23:51.311528 containerd[1286]: time="2024-06-25T16:23:51.311449288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:23:51.316959 containerd[1286]: time="2024-06-25T16:23:51.316889975Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:51.317645 containerd[1286]: time="2024-06-25T16:23:51.317605036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:51.323826 containerd[1286]: time="2024-06-25T16:23:51.323767899Z" level=info msg="CreateContainer within sandbox \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:23:51.347633 containerd[1286]: time="2024-06-25T16:23:51.347577216Z" level=info msg="CreateContainer within sandbox \"80fb8189d71ccf3b6a41f0e9434ddcc1db9a45a07436cd62cebfcac13fbd6163\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4fe1f83ad0b02fa52ade353fa4f04db337ce943316bcde0c97635577d505e84a\"" Jun 25 16:23:51.348204 containerd[1286]: time="2024-06-25T16:23:51.348148265Z" level=info msg="StartContainer for \"4fe1f83ad0b02fa52ade353fa4f04db337ce943316bcde0c97635577d505e84a\"" Jun 25 16:23:51.411230 systemd[1]: Started cri-containerd-4fe1f83ad0b02fa52ade353fa4f04db337ce943316bcde0c97635577d505e84a.scope - libcontainer container 4fe1f83ad0b02fa52ade353fa4f04db337ce943316bcde0c97635577d505e84a. 
Jun 25 16:23:51.423000 audit: BPF prog-id=142 op=LOAD Jun 25 16:23:51.423000 audit[3560]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3008 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:51.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466653166383361643062303266613532616465333533666134663034 Jun 25 16:23:51.423000 audit: BPF prog-id=143 op=LOAD Jun 25 16:23:51.423000 audit[3560]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3008 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:51.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466653166383361643062303266613532616465333533666134663034 Jun 25 16:23:51.423000 audit: BPF prog-id=143 op=UNLOAD Jun 25 16:23:51.423000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:23:51.423000 audit: BPF prog-id=144 op=LOAD Jun 25 16:23:51.423000 audit[3560]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3008 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:51.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466653166383361643062303266613532616465333533666134663034 Jun 25 16:23:51.451654 containerd[1286]: time="2024-06-25T16:23:51.451586758Z" level=info msg="StartContainer for \"4fe1f83ad0b02fa52ade353fa4f04db337ce943316bcde0c97635577d505e84a\" returns successfully" Jun 25 16:23:51.501800 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:23:51.501913 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 16:23:52.058934 kubelet[2299]: E0625 16:23:52.058899 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:52.070220 kubelet[2299]: I0625 16:23:52.069893 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gqfmt" podStartSLOduration=4.766853738 podStartE2EDuration="17.069866259s" podCreationTimestamp="2024-06-25 16:23:35 +0000 UTC" firstStartedPulling="2024-06-25 16:23:39.009229436 +0000 UTC m=+35.166459758" lastFinishedPulling="2024-06-25 16:23:51.312241947 +0000 UTC m=+47.469472279" observedRunningTime="2024-06-25 16:23:52.06916877 +0000 UTC m=+48.226399132" watchObservedRunningTime="2024-06-25 16:23:52.069866259 +0000 UTC m=+48.227096591" Jun 25 16:23:52.752000 audit[3674]: AVC avc: denied { write } for pid=3674 comm="tee" name="fd" dev="proc" ino=24266 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.752000 audit[3674]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdaa223a16 a2=241 a3=1b6 items=1 ppid=3650 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.752000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:23:52.752000 audit: PATH item=0 name="/dev/fd/63" inode=25137 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.756000 audit[3708]: AVC avc: denied { write } for pid=3708 comm="tee" name="fd" dev="proc" ino=26911 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.756000 audit[3708]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd0f15aa27 a2=241 a3=1b6 items=1 ppid=3656 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.756000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:23:52.756000 audit: PATH item=0 name="/dev/fd/63" inode=26908 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.762000 audit[3705]: AVC avc: denied { write } for pid=3705 comm="tee" name="fd" dev="proc" ino=26920 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.762000 audit[3705]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd5f4ea26 a2=241 a3=1b6 items=1 ppid=3658 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.762000 audit: CWD 
cwd="/etc/service/enabled/felix/log" Jun 25 16:23:52.762000 audit: PATH item=0 name="/dev/fd/63" inode=24268 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.789000 audit[3696]: AVC avc: denied { write } for pid=3696 comm="tee" name="fd" dev="proc" ino=24278 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.792000 audit[3711]: AVC avc: denied { write } for pid=3711 comm="tee" name="fd" dev="proc" ino=24283 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.797000 audit[3723]: AVC avc: denied { write } for pid=3723 comm="tee" name="fd" dev="proc" ino=26190 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.797000 audit[3723]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff97a5ca26 a2=241 a3=1b6 items=1 ppid=3655 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.797000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:23:52.797000 audit: PATH item=0 name="/dev/fd/63" inode=24275 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.789000 audit[3696]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec1eeca28 a2=241 a3=1b6 items=1 ppid=3651 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.789000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:23:52.789000 audit: PATH item=0 name="/dev/fd/63" inode=25144 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.792000 audit[3711]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb8c4ca26 a2=241 a3=1b6 items=1 ppid=3657 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.792000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:23:52.792000 audit: PATH item=0 name="/dev/fd/63" inode=26917 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.792000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.806000 audit[3734]: AVC avc: denied { write } for pid=3734 comm="tee" name="fd" dev="proc" ino=26939 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:23:52.806000 audit[3734]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc83195a17 a2=241 a3=1b6 items=1 ppid=3649 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.806000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:23:52.806000 audit: PATH item=0 name="/dev/fd/63" inode=24286 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:23:52.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:23:52.977624 systemd-networkd[1115]: vxlan.calico: Link UP Jun 25 16:23:52.977631 systemd-networkd[1115]: vxlan.calico: Gained carrier Jun 25 16:23:52.990000 audit: BPF prog-id=145 op=LOAD Jun 25 16:23:52.990000 audit[3795]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd071c020 a2=70 a3=7fd90c5a8000 items=0 ppid=3662 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.990000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:52.990000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:23:52.990000 audit: BPF prog-id=146 op=LOAD Jun 25 16:23:52.990000 audit[3795]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd071c020 a2=70 a3=6f items=0 ppid=3662 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.990000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:52.990000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:23:52.990000 audit: BPF prog-id=147 op=LOAD Jun 25 16:23:52.990000 audit[3795]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd071bfb0 a2=70 a3=7fffd071c020 items=0 ppid=3662 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.990000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:52.990000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:23:52.991000 audit: BPF prog-id=148 op=LOAD Jun 25 16:23:52.991000 
audit[3795]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffd071bfe0 a2=70 a3=0 items=0 ppid=3662 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:52.991000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:23:53.002000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:23:53.042000 audit[3829]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:53.042000 audit[3829]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffddcfab090 a2=0 a3=7ffddcfab07c items=0 ppid=3662 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:53.042000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:53.044000 audit[3828]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:53.044000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff98090890 a2=0 a3=7fff9809087c items=0 ppid=3662 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:53.044000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:53.044000 audit[3827]: NETFILTER_CFG table=raw:105 family=2 entries=19 op=nft_register_chain pid=3827 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:53.044000 audit[3827]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffc5a442890 a2=0 a3=7ffc5a44287c items=0 ppid=3662 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:53.044000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:53.047000 audit[3832]: NETFILTER_CFG table=filter:106 family=2 entries=39 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:53.047000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fffe7bd7f30 a2=0 a3=7fffe7bd7f1c items=0 ppid=3662 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:53.047000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 
16:23:53.060332 kubelet[2299]: E0625 16:23:53.060309 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:23:54.991263 systemd-networkd[1115]: vxlan.calico: Gained IPv6LL Jun 25 16:23:56.064136 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:49462.service - OpenSSH per-connection server daemon (10.0.0.1:49462). Jun 25 16:23:56.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.90:22-10.0.0.1:49462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.065263 kernel: kauditd_printk_skb: 75 callbacks suppressed Jun 25 16:23:56.065312 kernel: audit: type=1130 audit(1719332636.064:583): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.90:22-10.0.0.1:49462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.104000 audit[3861]: USER_ACCT pid=3861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.104640 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 49462 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:56.106049 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:56.105000 audit[3861]: CRED_ACQ pid=3861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.109409 systemd-logind[1277]: New session 12 of user core. 
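The recurring "Nameserver limits exceeded" kubelet events above mean the node's resolv.conf lists more nameservers than kubelet will write into a pod's resolv.conf; it keeps the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A rough sketch of that truncation, with a hypothetical fourth entry standing in for whatever was omitted:

    MAX_NAMESERVERS = 3  # kubelet applies at most three nameservers per pod

    def apply_nameserver_limit(nameservers: list[str]) -> list[str]:
        applied = nameservers[:MAX_NAMESERVERS]
        if len(nameservers) > MAX_NAMESERVERS:
            # Mirrors the kubelet event text seen above.
            print("Nameserver limits were exceeded, some nameservers have been omitted, "
                  "the applied nameserver line is: " + " ".join(applied))
        return applied

    apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])  # 8.8.4.4 is hypothetical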
Jun 25 16:23:56.110418 kernel: audit: type=1101 audit(1719332636.104:584): pid=3861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.110500 kernel: audit: type=1103 audit(1719332636.105:585): pid=3861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.110531 kernel: audit: type=1006 audit(1719332636.105:586): pid=3861 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:23:56.112245 kernel: audit: type=1300 audit(1719332636.105:586): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd73948840 a2=3 a3=7f88c90eb480 items=0 ppid=1 pid=3861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.105000 audit[3861]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd73948840 a2=3 a3=7f88c90eb480 items=0 ppid=1 pid=3861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.115376 kernel: audit: type=1327 audit(1719332636.105:586): proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:56.105000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:56.125326 systemd[1]: Started session-12.scope - Session 12 of User core. 
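The pod_startup_latency_tracker entry for calico-node-gqfmt above is internally consistent: the end-to-end duration (17.069866259s from creation at 16:23:35 to the observed running time) minus the image-pull window (firstStartedPulling m=+35.166459758 to lastFinishedPulling m=+47.469472279, i.e. 12.303012521s) equals the reported podStartSLOduration of 4.766853738s, so the SLO figure excludes time spent pulling images. Checking the arithmetic with the monotonic offsets copied from that entry:

    # Monotonic offsets (the m=+... values) from the latency-tracker entry above.
    first_started_pulling = 35.166459758
    last_finished_pulling = 47.469472279
    e2e_duration = 17.069866259

    pull_duration = last_finished_pulling - first_started_pulling  # 12.303012521 s
    slo_duration = e2e_duration - pull_duration                    # 4.766853738 s
    print(f"pull={pull_duration:.9f}s  slo={slo_duration:.9f}s")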
Jun 25 16:23:56.128000 audit[3861]: USER_START pid=3861 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.130000 audit[3863]: CRED_ACQ pid=3863 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.135001 kernel: audit: type=1105 audit(1719332636.128:587): pid=3861 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.135050 kernel: audit: type=1103 audit(1719332636.130:588): pid=3863 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.232691 sshd[3861]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:56.233000 audit[3861]: USER_END pid=3861 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.233000 audit[3861]: CRED_DISP pid=3861 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.239938 kernel: audit: type=1106 audit(1719332636.233:589): pid=3861 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.239990 kernel: audit: type=1104 audit(1719332636.233:590): pid=3861 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.242643 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:49462.service: Deactivated successfully. Jun 25 16:23:56.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.90:22-10.0.0.1:49462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.243342 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:23:56.243928 systemd-logind[1277]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:23:56.245725 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:49476.service - OpenSSH per-connection server daemon (10.0.0.1:49476). Jun 25 16:23:56.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.90:22-10.0.0.1:49476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:56.246621 systemd-logind[1277]: Removed session 12. Jun 25 16:23:56.279000 audit[3875]: USER_ACCT pid=3875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.279805 sshd[3875]: Accepted publickey for core from 10.0.0.1 port 49476 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:56.280000 audit[3875]: CRED_ACQ pid=3875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.280000 audit[3875]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6fec1d40 a2=3 a3=7feabc386480 items=0 ppid=1 pid=3875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.280000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:56.280921 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:56.284563 systemd-logind[1277]: New session 13 of user core. Jun 25 16:23:56.292242 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:23:56.296000 audit[3875]: USER_START pid=3875 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.297000 audit[3877]: CRED_ACQ pid=3877 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.457181 sshd[3875]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:56.458000 audit[3875]: USER_END pid=3875 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.458000 audit[3875]: CRED_DISP pid=3875 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.464405 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:49476.service: Deactivated successfully. Jun 25 16:23:56.465062 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:23:56.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.90:22-10.0.0.1:49476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.466681 systemd-logind[1277]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:23:56.470657 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:49478.service - OpenSSH per-connection server daemon (10.0.0.1:49478). 
Jun 25 16:23:56.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.90:22-10.0.0.1:49478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.472333 systemd-logind[1277]: Removed session 13. Jun 25 16:23:56.512000 audit[3886]: USER_ACCT pid=3886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.512704 sshd[3886]: Accepted publickey for core from 10.0.0.1 port 49478 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:23:56.513000 audit[3886]: CRED_ACQ pid=3886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.513000 audit[3886]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff41056240 a2=3 a3=7f6511bab480 items=0 ppid=1 pid=3886 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:56.513000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:56.514214 sshd[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:56.518694 systemd-logind[1277]: New session 14 of user core. Jun 25 16:23:56.525274 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:23:56.530000 audit[3886]: USER_START pid=3886 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.532000 audit[3888]: CRED_ACQ pid=3888 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.642344 sshd[3886]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:56.643000 audit[3886]: USER_END pid=3886 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.643000 audit[3886]: CRED_DISP pid=3886 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:56.644955 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:49478.service: Deactivated successfully. Jun 25 16:23:56.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.90:22-10.0.0.1:49478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.645949 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:23:56.646584 systemd-logind[1277]: Session 14 logged out. Waiting for processes to exit. 
Jun 25 16:23:56.647333 systemd-logind[1277]: Removed session 14. Jun 25 16:23:56.920053 containerd[1286]: time="2024-06-25T16:23:56.919994716Z" level=info msg="StopPodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\"" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.958 [INFO][3916] k8s.go 608: Cleaning up netns ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.958 [INFO][3916] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" iface="eth0" netns="/var/run/netns/cni-78064502-05e3-28c3-ee2a-1dec49e68c7d" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.958 [INFO][3916] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" iface="eth0" netns="/var/run/netns/cni-78064502-05e3-28c3-ee2a-1dec49e68c7d" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.958 [INFO][3916] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" iface="eth0" netns="/var/run/netns/cni-78064502-05e3-28c3-ee2a-1dec49e68c7d" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.958 [INFO][3916] k8s.go 615: Releasing IP address(es) ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.958 [INFO][3916] utils.go 188: Calico CNI releasing IP address ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.989 [INFO][3923] ipam_plugin.go 411: Releasing address using handleID ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.989 [INFO][3923] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.989 [INFO][3923] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.994 [WARNING][3923] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.994 [INFO][3923] ipam_plugin.go 439: Releasing address using workloadID ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.996 [INFO][3923] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:23:56.998189 containerd[1286]: 2024-06-25 16:23:56.997 [INFO][3916] k8s.go 621: Teardown processing complete. 
ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:23:56.998645 containerd[1286]: time="2024-06-25T16:23:56.998350674Z" level=info msg="TearDown network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" successfully" Jun 25 16:23:56.998645 containerd[1286]: time="2024-06-25T16:23:56.998388134Z" level=info msg="StopPodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" returns successfully" Jun 25 16:23:56.999098 containerd[1286]: time="2024-06-25T16:23:56.999055886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m25c,Uid:85542427-f47c-46c9-a170-591e5c3b27fa,Namespace:calico-system,Attempt:1,}" Jun 25 16:23:57.000490 systemd[1]: run-netns-cni\x2d78064502\x2d05e3\x2d28c3\x2dee2a\x2d1dec49e68c7d.mount: Deactivated successfully. Jun 25 16:23:57.097052 systemd-networkd[1115]: cali3afc8c10cac: Link UP Jun 25 16:23:57.098927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:23:57.098994 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3afc8c10cac: link becomes ready Jun 25 16:23:57.098999 systemd-networkd[1115]: cali3afc8c10cac: Gained carrier Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.042 [INFO][3931] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8m25c-eth0 csi-node-driver- calico-system 85542427-f47c-46c9-a170-591e5c3b27fa 914 0 2024-06-25 16:23:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-8m25c eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3afc8c10cac [] []}} ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.043 [INFO][3931] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.066 [INFO][3944] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" HandleID="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.073 [INFO][3944] ipam_plugin.go 264: Auto assigning IP ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" HandleID="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294f10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8m25c", "timestamp":"2024-06-25 16:23:57.066411455 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.073 [INFO][3944] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.073 [INFO][3944] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.073 [INFO][3944] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.074 [INFO][3944] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.077 [INFO][3944] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.081 [INFO][3944] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.082 [INFO][3944] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.084 [INFO][3944] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.084 [INFO][3944] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.086 [INFO][3944] ipam.go 1685: Creating new handle: k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.088 [INFO][3944] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.092 [INFO][3944] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.092 [INFO][3944] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" host="localhost" Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.092 [INFO][3944] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:23:57.110801 containerd[1286]: 2024-06-25 16:23:57.092 [INFO][3944] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" HandleID="k8s-pod-network.73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.111325 containerd[1286]: 2024-06-25 16:23:57.095 [INFO][3931] k8s.go 386: Populated endpoint ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8m25c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85542427-f47c-46c9-a170-591e5c3b27fa", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8m25c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3afc8c10cac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:57.111325 containerd[1286]: 2024-06-25 16:23:57.095 [INFO][3931] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.111325 containerd[1286]: 2024-06-25 16:23:57.095 [INFO][3931] dataplane_linux.go 68: Setting the host side veth name to cali3afc8c10cac ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.111325 containerd[1286]: 2024-06-25 16:23:57.099 [INFO][3931] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.111325 containerd[1286]: 2024-06-25 16:23:57.099 [INFO][3931] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8m25c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85542427-f47c-46c9-a170-591e5c3b27fa", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a", Pod:"csi-node-driver-8m25c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3afc8c10cac", MAC:"e6:84:e4:ad:d9:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:23:57.111325 containerd[1286]: 2024-06-25 16:23:57.108 [INFO][3931] k8s.go 500: Wrote updated endpoint to datastore ContainerID="73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a" Namespace="calico-system" Pod="csi-node-driver-8m25c" WorkloadEndpoint="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:23:57.119000 audit[3968]: NETFILTER_CFG table=filter:107 family=2 entries=34 op=nft_register_chain pid=3968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:23:57.119000 audit[3968]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffea7dec3f0 a2=0 a3=7ffea7dec3dc items=0 ppid=3662 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:57.119000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:23:57.142966 containerd[1286]: time="2024-06-25T16:23:57.142888131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:23:57.142966 containerd[1286]: time="2024-06-25T16:23:57.142942283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:57.142966 containerd[1286]: time="2024-06-25T16:23:57.142958714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:23:57.143179 containerd[1286]: time="2024-06-25T16:23:57.142995594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:23:57.172238 systemd[1]: Started cri-containerd-73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a.scope - libcontainer container 73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a. 
Jun 25 16:23:57.179000 audit: BPF prog-id=149 op=LOAD Jun 25 16:23:57.180000 audit: BPF prog-id=150 op=LOAD Jun 25 16:23:57.180000 audit[3985]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3976 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:57.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343135353738626364636263303434393266626431306438663263 Jun 25 16:23:57.180000 audit: BPF prog-id=151 op=LOAD Jun 25 16:23:57.180000 audit[3985]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3976 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:57.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343135353738626364636263303434393266626431306438663263 Jun 25 16:23:57.180000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:23:57.180000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:23:57.180000 audit: BPF prog-id=152 op=LOAD Jun 25 16:23:57.180000 audit[3985]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3976 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:57.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343135353738626364636263303434393266626431306438663263 Jun 25 16:23:57.181228 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:23:57.193352 containerd[1286]: time="2024-06-25T16:23:57.193300951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8m25c,Uid:85542427-f47c-46c9-a170-591e5c3b27fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a\"" Jun 25 16:23:57.195123 containerd[1286]: time="2024-06-25T16:23:57.195098077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:23:58.000973 systemd[1]: run-containerd-runc-k8s.io-73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a-runc.xYEPTy.mount: Deactivated successfully. 
Jun 25 16:23:58.319240 systemd-networkd[1115]: cali3afc8c10cac: Gained IPv6LL Jun 25 16:23:58.903580 containerd[1286]: time="2024-06-25T16:23:58.903517266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:58.905122 containerd[1286]: time="2024-06-25T16:23:58.905019980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:23:58.927671 containerd[1286]: time="2024-06-25T16:23:58.927605510Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:58.929511 containerd[1286]: time="2024-06-25T16:23:58.929472203Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:58.930859 containerd[1286]: time="2024-06-25T16:23:58.930794357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:58.931444 containerd[1286]: time="2024-06-25T16:23:58.931403870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.736272649s" Jun 25 16:23:58.931518 containerd[1286]: time="2024-06-25T16:23:58.931439417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:23:58.939192 containerd[1286]: time="2024-06-25T16:23:58.939148319Z" level=info msg="CreateContainer within sandbox \"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:23:58.955084 containerd[1286]: time="2024-06-25T16:23:58.955018107Z" level=info msg="CreateContainer within sandbox \"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7853607a91defcd04ca859de8c9c4a9b452b478b8bb9355b81d0471816ca1e4b\"" Jun 25 16:23:58.955622 containerd[1286]: time="2024-06-25T16:23:58.955579901Z" level=info msg="StartContainer for \"7853607a91defcd04ca859de8c9c4a9b452b478b8bb9355b81d0471816ca1e4b\"" Jun 25 16:23:58.985205 systemd[1]: Started cri-containerd-7853607a91defcd04ca859de8c9c4a9b452b478b8bb9355b81d0471816ca1e4b.scope - libcontainer container 7853607a91defcd04ca859de8c9c4a9b452b478b8bb9355b81d0471816ca1e4b. 
Jun 25 16:23:58.996000 audit: BPF prog-id=153 op=LOAD Jun 25 16:23:58.996000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3976 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:58.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738353336303761393164656663643034636138353964653863396334 Jun 25 16:23:58.997000 audit: BPF prog-id=154 op=LOAD Jun 25 16:23:58.997000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3976 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738353336303761393164656663643034636138353964653863396334 Jun 25 16:23:58.997000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:23:58.997000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:23:58.997000 audit: BPF prog-id=155 op=LOAD Jun 25 16:23:58.997000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3976 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738353336303761393164656663643034636138353964653863396334 Jun 25 16:23:59.011401 containerd[1286]: time="2024-06-25T16:23:59.011358532Z" level=info msg="StartContainer for \"7853607a91defcd04ca859de8c9c4a9b452b478b8bb9355b81d0471816ca1e4b\" returns successfully" Jun 25 16:23:59.012542 containerd[1286]: time="2024-06-25T16:23:59.012520157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:23:59.919606 containerd[1286]: time="2024-06-25T16:23:59.919558975Z" level=info msg="StopPodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\"" Jun 25 16:23:59.920061 containerd[1286]: time="2024-06-25T16:23:59.919580316Z" level=info msg="StopPodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\"" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.080 [INFO][4084] k8s.go 608: Cleaning up netns ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.081 [INFO][4084] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" iface="eth0" netns="/var/run/netns/cni-986868de-0bbf-ffc5-c175-4967cdac39a5" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.081 [INFO][4084] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" iface="eth0" netns="/var/run/netns/cni-986868de-0bbf-ffc5-c175-4967cdac39a5" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.081 [INFO][4084] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" iface="eth0" netns="/var/run/netns/cni-986868de-0bbf-ffc5-c175-4967cdac39a5" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.081 [INFO][4084] k8s.go 615: Releasing IP address(es) ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.081 [INFO][4084] utils.go 188: Calico CNI releasing IP address ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.130 [INFO][4103] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.130 [INFO][4103] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.130 [INFO][4103] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.143 [WARNING][4103] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.143 [INFO][4103] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.145 [INFO][4103] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:00.147810 containerd[1286]: 2024-06-25 16:24:00.146 [INFO][4084] k8s.go 621: Teardown processing complete. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:00.151239 containerd[1286]: time="2024-06-25T16:24:00.151188649Z" level=info msg="TearDown network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" successfully" Jun 25 16:24:00.151239 containerd[1286]: time="2024-06-25T16:24:00.151235027Z" level=info msg="StopPodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" returns successfully" Jun 25 16:24:00.151729 kubelet[2299]: E0625 16:24:00.151587 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:00.151926 systemd[1]: run-netns-cni\x2d986868de\x2d0bbf\x2dffc5\x2dc175\x2d4967cdac39a5.mount: Deactivated successfully. 
Jun 25 16:24:00.153541 containerd[1286]: time="2024-06-25T16:24:00.152798882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzb9h,Uid:d669eea6-8c43-4a18-a92b-b250a05611e1,Namespace:kube-system,Attempt:1,}" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.105 [INFO][4089] k8s.go 608: Cleaning up netns ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.105 [INFO][4089] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" iface="eth0" netns="/var/run/netns/cni-47a86f4d-7f06-9961-b3d4-ad454631c611" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.106 [INFO][4089] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" iface="eth0" netns="/var/run/netns/cni-47a86f4d-7f06-9961-b3d4-ad454631c611" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.106 [INFO][4089] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" iface="eth0" netns="/var/run/netns/cni-47a86f4d-7f06-9961-b3d4-ad454631c611" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.106 [INFO][4089] k8s.go 615: Releasing IP address(es) ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.106 [INFO][4089] utils.go 188: Calico CNI releasing IP address ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.132 [INFO][4109] ipam_plugin.go 411: Releasing address using handleID ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.132 [INFO][4109] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.145 [INFO][4109] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.151 [WARNING][4109] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.151 [INFO][4109] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.154 [INFO][4109] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:00.156615 containerd[1286]: 2024-06-25 16:24:00.155 [INFO][4089] k8s.go 621: Teardown processing complete. 
ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:00.156925 containerd[1286]: time="2024-06-25T16:24:00.156781971Z" level=info msg="TearDown network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" successfully" Jun 25 16:24:00.156925 containerd[1286]: time="2024-06-25T16:24:00.156805947Z" level=info msg="StopPodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" returns successfully" Jun 25 16:24:00.157631 containerd[1286]: time="2024-06-25T16:24:00.157586953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55495db4d7-r5s6p,Uid:836f38ef-e93b-478c-ac66-9060ef4334b9,Namespace:calico-system,Attempt:1,}" Jun 25 16:24:00.158606 systemd[1]: run-netns-cni\x2d47a86f4d\x2d7f06\x2d9961\x2db3d4\x2dad454631c611.mount: Deactivated successfully. Jun 25 16:24:00.522850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:24:00.523108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9e34231c07e: link becomes ready Jun 25 16:24:00.523356 systemd-networkd[1115]: cali9e34231c07e: Link UP Jun 25 16:24:00.523527 systemd-networkd[1115]: cali9e34231c07e: Gained carrier Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.462 [INFO][4120] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0 calico-kube-controllers-55495db4d7- calico-system 836f38ef-e93b-478c-ac66-9060ef4334b9 932 0 2024-06-25 16:23:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55495db4d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55495db4d7-r5s6p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9e34231c07e [] []}} ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.462 [INFO][4120] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.485 [INFO][4147] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" HandleID="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.492 [INFO][4147] ipam_plugin.go 264: Auto assigning IP ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" HandleID="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-55495db4d7-r5s6p", "timestamp":"2024-06-25 16:24:00.485540087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.492 [INFO][4147] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.492 [INFO][4147] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.492 [INFO][4147] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.493 [INFO][4147] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.497 [INFO][4147] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.501 [INFO][4147] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.503 [INFO][4147] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.505 [INFO][4147] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.505 [INFO][4147] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.506 [INFO][4147] ipam.go 1685: Creating new handle: k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482 Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.509 [INFO][4147] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.514 [INFO][4147] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.514 [INFO][4147] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" host="localhost" Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.515 [INFO][4147] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:24:00.535159 containerd[1286]: 2024-06-25 16:24:00.515 [INFO][4147] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" HandleID="k8s-pod-network.3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.536141 containerd[1286]: 2024-06-25 16:24:00.517 [INFO][4120] k8s.go 386: Populated endpoint ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0", GenerateName:"calico-kube-controllers-55495db4d7-", Namespace:"calico-system", SelfLink:"", UID:"836f38ef-e93b-478c-ac66-9060ef4334b9", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55495db4d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55495db4d7-r5s6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e34231c07e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:00.536141 containerd[1286]: 2024-06-25 16:24:00.518 [INFO][4120] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.536141 containerd[1286]: 2024-06-25 16:24:00.518 [INFO][4120] dataplane_linux.go 68: Setting the host side veth name to cali9e34231c07e ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.536141 containerd[1286]: 2024-06-25 16:24:00.523 [INFO][4120] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.536141 containerd[1286]: 2024-06-25 16:24:00.523 [INFO][4120] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0", GenerateName:"calico-kube-controllers-55495db4d7-", Namespace:"calico-system", SelfLink:"", UID:"836f38ef-e93b-478c-ac66-9060ef4334b9", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55495db4d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482", Pod:"calico-kube-controllers-55495db4d7-r5s6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e34231c07e", MAC:"f2:e0:ac:56:73:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:00.536141 containerd[1286]: 2024-06-25 16:24:00.533 [INFO][4120] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482" Namespace="calico-system" Pod="calico-kube-controllers-55495db4d7-r5s6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:00.549000 audit[4178]: NETFILTER_CFG table=filter:108 family=2 entries=34 op=nft_register_chain pid=4178 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:00.549000 audit[4178]: SYSCALL arch=c000003e syscall=46 success=yes exit=18640 a0=3 a1=7ffc486d6b10 a2=0 a3=7ffc486d6afc items=0 ppid=3662 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.549000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:00.556135 systemd-networkd[1115]: cali65c3675229f: Link UP Jun 25 16:24:00.557709 systemd-networkd[1115]: cali65c3675229f: Gained carrier Jun 25 16:24:00.558180 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali65c3675229f: link becomes ready Jun 25 16:24:00.563688 containerd[1286]: time="2024-06-25T16:24:00.561538322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:00.563688 containerd[1286]: time="2024-06-25T16:24:00.561596503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:00.563688 containerd[1286]: time="2024-06-25T16:24:00.561634437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:00.563688 containerd[1286]: time="2024-06-25T16:24:00.561646840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.463 [INFO][4130] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0 coredns-7db6d8ff4d- kube-system d669eea6-8c43-4a18-a92b-b250a05611e1 931 0 2024-06-25 16:23:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rzb9h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali65c3675229f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.463 [INFO][4130] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.490 [INFO][4151] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" HandleID="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.502 [INFO][4151] ipam_plugin.go 264: Auto assigning IP ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" HandleID="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfdc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rzb9h", "timestamp":"2024-06-25 16:24:00.490896315 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.502 [INFO][4151] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.514 [INFO][4151] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.514 [INFO][4151] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.516 [INFO][4151] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.521 [INFO][4151] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.530 [INFO][4151] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.534 [INFO][4151] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.536 [INFO][4151] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.536 [INFO][4151] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.538 [INFO][4151] ipam.go 1685: Creating new handle: k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7 Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.546 [INFO][4151] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.551 [INFO][4151] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.551 [INFO][4151] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" host="localhost" Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.551 [INFO][4151] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:24:00.569643 containerd[1286]: 2024-06-25 16:24:00.551 [INFO][4151] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" HandleID="k8s-pod-network.dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.570342 containerd[1286]: 2024-06-25 16:24:00.553 [INFO][4130] k8s.go 386: Populated endpoint ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d669eea6-8c43-4a18-a92b-b250a05611e1", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-rzb9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65c3675229f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:00.570342 containerd[1286]: 2024-06-25 16:24:00.553 [INFO][4130] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.570342 containerd[1286]: 2024-06-25 16:24:00.553 [INFO][4130] dataplane_linux.go 68: Setting the host side veth name to cali65c3675229f ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.570342 containerd[1286]: 2024-06-25 16:24:00.558 [INFO][4130] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.570342 containerd[1286]: 2024-06-25 16:24:00.558 [INFO][4130] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d669eea6-8c43-4a18-a92b-b250a05611e1", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7", Pod:"coredns-7db6d8ff4d-rzb9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65c3675229f", MAC:"52:2e:17:29:5a:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:00.570342 containerd[1286]: 2024-06-25 16:24:00.567 [INFO][4130] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rzb9h" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:00.575000 audit[4218]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:00.575000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffc058911f0 a2=0 a3=7ffc058911dc items=0 ppid=3662 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.575000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:00.578282 systemd[1]: Started cri-containerd-3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482.scope - libcontainer container 3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482. 
Jun 25 16:24:00.588000 audit: BPF prog-id=156 op=LOAD Jun 25 16:24:00.588000 audit: BPF prog-id=157 op=LOAD Jun 25 16:24:00.588000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4187 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337323530373762613864643735396635383839313639643961393132 Jun 25 16:24:00.588000 audit: BPF prog-id=158 op=LOAD Jun 25 16:24:00.588000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4187 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337323530373762613864643735396635383839313639643961393132 Jun 25 16:24:00.588000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:24:00.588000 audit: BPF prog-id=157 op=UNLOAD Jun 25 16:24:00.588000 audit: BPF prog-id=159 op=LOAD Jun 25 16:24:00.588000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4187 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337323530373762613864643735396635383839313639643961393132 Jun 25 16:24:00.590403 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:24:00.598729 containerd[1286]: time="2024-06-25T16:24:00.598639962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:00.598879 containerd[1286]: time="2024-06-25T16:24:00.598698073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:00.598879 containerd[1286]: time="2024-06-25T16:24:00.598718692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:00.598879 containerd[1286]: time="2024-06-25T16:24:00.598731617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:00.611862 containerd[1286]: time="2024-06-25T16:24:00.611814464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55495db4d7-r5s6p,Uid:836f38ef-e93b-478c-ac66-9060ef4334b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482\"" Jun 25 16:24:00.619225 systemd[1]: Started cri-containerd-dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7.scope - libcontainer container dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7. Jun 25 16:24:00.627000 audit: BPF prog-id=160 op=LOAD Jun 25 16:24:00.627000 audit: BPF prog-id=161 op=LOAD Jun 25 16:24:00.627000 audit[4249]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4239 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323837633638643562653137636563353733613936316437323635 Jun 25 16:24:00.627000 audit: BPF prog-id=162 op=LOAD Jun 25 16:24:00.627000 audit[4249]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4239 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323837633638643562653137636563353733613936316437323635 Jun 25 16:24:00.627000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:24:00.627000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:24:00.627000 audit: BPF prog-id=163 op=LOAD Jun 25 16:24:00.627000 audit[4249]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4239 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323837633638643562653137636563353733613936316437323635 Jun 25 16:24:00.629592 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:24:00.655292 containerd[1286]: time="2024-06-25T16:24:00.655234163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzb9h,Uid:d669eea6-8c43-4a18-a92b-b250a05611e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7\"" Jun 25 16:24:00.656220 kubelet[2299]: E0625 16:24:00.655941 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 
25 16:24:00.659042 containerd[1286]: time="2024-06-25T16:24:00.658494438Z" level=info msg="CreateContainer within sandbox \"dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:24:00.681635 containerd[1286]: time="2024-06-25T16:24:00.681576336Z" level=info msg="CreateContainer within sandbox \"dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31c3ba7bf65e3bee15c3af2d7051b853fda081c0fa8d9c7c8703966f3365b86f\"" Jun 25 16:24:00.682161 containerd[1286]: time="2024-06-25T16:24:00.682124315Z" level=info msg="StartContainer for \"31c3ba7bf65e3bee15c3af2d7051b853fda081c0fa8d9c7c8703966f3365b86f\"" Jun 25 16:24:00.708336 systemd[1]: Started cri-containerd-31c3ba7bf65e3bee15c3af2d7051b853fda081c0fa8d9c7c8703966f3365b86f.scope - libcontainer container 31c3ba7bf65e3bee15c3af2d7051b853fda081c0fa8d9c7c8703966f3365b86f. Jun 25 16:24:00.724000 audit: BPF prog-id=164 op=LOAD Jun 25 16:24:00.725000 audit: BPF prog-id=165 op=LOAD Jun 25 16:24:00.725000 audit[4283]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4239 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633362613762663635653362656531356333616632643730353162 Jun 25 16:24:00.725000 audit: BPF prog-id=166 op=LOAD Jun 25 16:24:00.725000 audit[4283]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4239 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633362613762663635653362656531356333616632643730353162 Jun 25 16:24:00.725000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:24:00.725000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:24:00.725000 audit: BPF prog-id=167 op=LOAD Jun 25 16:24:00.725000 audit[4283]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4239 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:00.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633362613762663635653362656531356333616632643730353162 Jun 25 16:24:00.798091 containerd[1286]: time="2024-06-25T16:24:00.797942075Z" level=info msg="StartContainer for \"31c3ba7bf65e3bee15c3af2d7051b853fda081c0fa8d9c7c8703966f3365b86f\" returns successfully" Jun 25 16:24:01.092930 kubelet[2299]: E0625 16:24:01.092785 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:01.122661 containerd[1286]: time="2024-06-25T16:24:01.122602099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:01.136983 containerd[1286]: time="2024-06-25T16:24:01.136906655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:24:01.150014 containerd[1286]: time="2024-06-25T16:24:01.149945207Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:01.155000 audit[4317]: NETFILTER_CFG table=filter:110 family=2 entries=14 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:01.157913 kernel: kauditd_printk_skb: 91 callbacks suppressed Jun 25 16:24:01.157974 kernel: audit: type=1325 audit(1719332641.155:642): table=filter:110 family=2 entries=14 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:01.158168 containerd[1286]: time="2024-06-25T16:24:01.158127062Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:01.155000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffef2c2b1a0 a2=0 a3=7ffef2c2b18c items=0 ppid=2502 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.166031 kernel: audit: type=1300 audit(1719332641.155:642): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffef2c2b1a0 a2=0 a3=7ffef2c2b18c items=0 ppid=2502 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.155000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:01.156000 audit[4317]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:01.169699 containerd[1286]: time="2024-06-25T16:24:01.169644712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:01.170902 containerd[1286]: time="2024-06-25T16:24:01.170841875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.158287241s" Jun 25 16:24:01.170902 containerd[1286]: time="2024-06-25T16:24:01.170898403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference 
\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:24:01.171176 kernel: audit: type=1327 audit(1719332641.155:642): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:01.171216 kernel: audit: type=1325 audit(1719332641.156:643): table=nat:111 family=2 entries=14 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:01.171244 kernel: audit: type=1300 audit(1719332641.156:643): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef2c2b1a0 a2=0 a3=0 items=0 ppid=2502 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.156000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef2c2b1a0 a2=0 a3=0 items=0 ppid=2502 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.172420 containerd[1286]: time="2024-06-25T16:24:01.172371112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:24:01.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:01.179780 containerd[1286]: time="2024-06-25T16:24:01.177798132Z" level=info msg="CreateContainer within sandbox \"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:24:01.180102 kernel: audit: type=1327 audit(1719332641.156:643): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:01.537000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.565177 kernel: audit: type=1400 audit(1719332641.537:644): avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.537000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002d64f60 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:01.537000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:01.601745 kernel: audit: type=1300 audit(1719332641.537:644): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002d64f60 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:01.601852 kernel: audit: type=1327 audit(1719332641.537:644): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:01.601885 kernel: audit: type=1400 audit(1719332641.537:645): avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.537000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.537000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0026f9e00 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:01.537000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:01.652928 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:49484.service - OpenSSH per-connection server daemon (10.0.0.1:49484). Jun 25 16:24:01.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.90:22-10.0.0.1:49484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:01.675000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.675000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=520972 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.675000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c009644300 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:24:01.675000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c007bc98c0 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:24:01.675000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:24:01.675000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:24:01.684000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.684000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c0062a9860 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:24:01.684000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:24:01.711000 audit[4321]: USER_ACCT pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:01.712527 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 49484 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:01.711000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=520966 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 
tclass=file permissive=0 Jun 25 16:24:01.711000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c007d1d290 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:24:01.711000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:24:01.712000 audit[4321]: CRED_ACQ pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:01.712000 audit[4321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceb6fe520 a2=3 a3=7f6efee34480 items=0 ppid=1 pid=4321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.712000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:01.713890 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:01.714000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.714000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0063649e0 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:24:01.714000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:24:01.717728 systemd-logind[1277]: New session 15 of user core. Jun 25 16:24:01.725000 audit[2178]: AVC avc: denied { watch } for pid=2178 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=520970 scontext=system_u:system_r:container_t:s0:c730,c849 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:01.725000 audit[2178]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c009644330 a2=fc6 a3=0 items=0 ppid=2009 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c730,c849 key=(null) Jun 25 16:24:01.725000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3930002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:24:01.727224 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:24:01.730000 audit[4321]: USER_START pid=4321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:01.731000 audit[4323]: CRED_ACQ pid=4323 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:01.888040 sshd[4321]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:01.888000 audit[4321]: USER_END pid=4321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:01.888000 audit[4321]: CRED_DISP pid=4321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:01.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.90:22-10.0.0.1:49484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:01.890612 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:49484.service: Deactivated successfully. Jun 25 16:24:01.891529 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:24:01.892213 systemd-logind[1277]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:24:01.892938 systemd-logind[1277]: Removed session 15. Jun 25 16:24:01.903225 systemd-networkd[1115]: cali65c3675229f: Gained IPv6LL Jun 25 16:24:01.920445 containerd[1286]: time="2024-06-25T16:24:01.919785503Z" level=info msg="StopPodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\"" Jun 25 16:24:01.930287 containerd[1286]: time="2024-06-25T16:24:01.930229669Z" level=info msg="CreateContainer within sandbox \"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a57f064419ba99cad75f80bec158af4f3c67fecb7bf122dd5171ff50cbb511fe\"" Jun 25 16:24:01.930901 containerd[1286]: time="2024-06-25T16:24:01.930867080Z" level=info msg="StartContainer for \"a57f064419ba99cad75f80bec158af4f3c67fecb7bf122dd5171ff50cbb511fe\"" Jun 25 16:24:01.961459 systemd[1]: Started cri-containerd-a57f064419ba99cad75f80bec158af4f3c67fecb7bf122dd5171ff50cbb511fe.scope - libcontainer container a57f064419ba99cad75f80bec158af4f3c67fecb7bf122dd5171ff50cbb511fe. 
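The PROCTITLE fields in the audit records above and below are the recorded process command line, hex-encoded by auditd with NUL bytes separating the arguments (ausearch -i performs the same decoding). A minimal Python sketch, with a helper name of our own choosing, pasted against one of the hex strings that appears verbatim in this log:

    # Decode an auditd PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # The iptables-restore proctitle attached to the NETFILTER_CFG events above:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

The truncated runc proctitles decode the same way, yielding "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id...>".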
Jun 25 16:24:01.972000 audit: BPF prog-id=168 op=LOAD Jun 25 16:24:01.972000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3976 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376630363434313962613939636164373566383062656331353861 Jun 25 16:24:01.972000 audit: BPF prog-id=169 op=LOAD Jun 25 16:24:01.972000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3976 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376630363434313962613939636164373566383062656331353861 Jun 25 16:24:01.972000 audit: BPF prog-id=169 op=UNLOAD Jun 25 16:24:01.972000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:24:01.972000 audit: BPF prog-id=170 op=LOAD Jun 25 16:24:01.972000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3976 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:01.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376630363434313962613939636164373566383062656331353861 Jun 25 16:24:01.998596 kubelet[2299]: I0625 16:24:01.998403 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rzb9h" podStartSLOduration=41.998382697 podStartE2EDuration="41.998382697s" podCreationTimestamp="2024-06-25 16:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:01.119628196 +0000 UTC m=+57.276858528" watchObservedRunningTime="2024-06-25 16:24:01.998382697 +0000 UTC m=+58.155613039" Jun 25 16:24:02.145353 containerd[1286]: time="2024-06-25T16:24:02.145233016Z" level=info msg="StartContainer for \"a57f064419ba99cad75f80bec158af4f3c67fecb7bf122dd5171ff50cbb511fe\" returns successfully" Jun 25 16:24:02.150546 kubelet[2299]: E0625 16:24:02.150481 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:01.998 [INFO][4349] k8s.go 608: Cleaning up netns ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:01.999 [INFO][4349] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" iface="eth0" netns="/var/run/netns/cni-debc7e91-2517-1cae-1b07-da91192090fc" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.000 [INFO][4349] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" iface="eth0" netns="/var/run/netns/cni-debc7e91-2517-1cae-1b07-da91192090fc" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.000 [INFO][4349] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" iface="eth0" netns="/var/run/netns/cni-debc7e91-2517-1cae-1b07-da91192090fc" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.000 [INFO][4349] k8s.go 615: Releasing IP address(es) ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.000 [INFO][4349] utils.go 188: Calico CNI releasing IP address ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.130 [INFO][4390] ipam_plugin.go 411: Releasing address using handleID ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.130 [INFO][4390] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.130 [INFO][4390] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.204 [WARNING][4390] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.204 [INFO][4390] ipam_plugin.go 439: Releasing address using workloadID ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.206 [INFO][4390] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:02.209721 containerd[1286]: 2024-06-25 16:24:02.208 [INFO][4349] k8s.go 621: Teardown processing complete. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:02.211745 systemd[1]: run-netns-cni\x2ddebc7e91\x2d2517\x2d1cae\x2d1b07\x2dda91192090fc.mount: Deactivated successfully. 
Jun 25 16:24:02.212605 containerd[1286]: time="2024-06-25T16:24:02.212553922Z" level=info msg="TearDown network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" successfully" Jun 25 16:24:02.212605 containerd[1286]: time="2024-06-25T16:24:02.212601373Z" level=info msg="StopPodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" returns successfully" Jun 25 16:24:02.212891 kubelet[2299]: E0625 16:24:02.212870 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:02.213266 containerd[1286]: time="2024-06-25T16:24:02.213228493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7p2pf,Uid:7a782fdb-c775-468b-a146-70b65f402d66,Namespace:kube-system,Attempt:1,}" Jun 25 16:24:02.535580 kubelet[2299]: I0625 16:24:02.535390 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8m25c" podStartSLOduration=30.558105666 podStartE2EDuration="34.535368142s" podCreationTimestamp="2024-06-25 16:23:28 +0000 UTC" firstStartedPulling="2024-06-25 16:23:57.194743297 +0000 UTC m=+53.351973629" lastFinishedPulling="2024-06-25 16:24:01.172005742 +0000 UTC m=+57.329236105" observedRunningTime="2024-06-25 16:24:02.268702437 +0000 UTC m=+58.425932779" watchObservedRunningTime="2024-06-25 16:24:02.535368142 +0000 UTC m=+58.692598474" Jun 25 16:24:02.543235 systemd-networkd[1115]: cali9e34231c07e: Gained IPv6LL Jun 25 16:24:02.561000 audit[4401]: NETFILTER_CFG table=filter:112 family=2 entries=11 op=nft_register_rule pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:02.561000 audit[4401]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe584a1b10 a2=0 a3=7ffe584a1afc items=0 ppid=2502 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.561000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:02.562000 audit[4401]: NETFILTER_CFG table=nat:113 family=2 entries=35 op=nft_register_chain pid=4401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:02.562000 audit[4401]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe584a1b10 a2=0 a3=7ffe584a1afc items=0 ppid=2502 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:02.851743 systemd-networkd[1115]: calicf494c86e3b: Link UP Jun 25 16:24:02.853857 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:24:02.853908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicf494c86e3b: link becomes ready Jun 25 16:24:02.853989 systemd-networkd[1115]: calicf494c86e3b: Gained carrier Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.668 [INFO][4409] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0 coredns-7db6d8ff4d- kube-system 
7a782fdb-c775-468b-a146-70b65f402d66 955 0 2024-06-25 16:23:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-7p2pf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicf494c86e3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.669 [INFO][4409] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.697 [INFO][4417] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" HandleID="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.705 [INFO][4417] ipam_plugin.go 264: Auto assigning IP ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" HandleID="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059c000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-7p2pf", "timestamp":"2024-06-25 16:24:02.697456511 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.705 [INFO][4417] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.705 [INFO][4417] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.705 [INFO][4417] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.707 [INFO][4417] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.712 [INFO][4417] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.716 [INFO][4417] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.718 [INFO][4417] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.720 [INFO][4417] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.720 [INFO][4417] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.721 [INFO][4417] ipam.go 1685: Creating new handle: k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81 Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.725 [INFO][4417] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.847 [INFO][4417] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.847 [INFO][4417] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" host="localhost" Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.847 [INFO][4417] ipam_plugin.go 373: Released host-wide IPAM lock. 
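The IPAM trace above shows the pod address being taken from the host's affine block 192.168.88.128/26 before being reported below. A one-liner with Python's standard ipaddress module confirms that the claimed 192.168.88.132 sits inside that 64-address block:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    pod_ip = ipaddress.ip_address("192.168.88.132")
    print(pod_ip in block, block.num_addresses)  # True 64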
Jun 25 16:24:02.866645 containerd[1286]: 2024-06-25 16:24:02.847 [INFO][4417] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" HandleID="k8s-pod-network.a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.867294 containerd[1286]: 2024-06-25 16:24:02.849 [INFO][4409] k8s.go 386: Populated endpoint ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a782fdb-c775-468b-a146-70b65f402d66", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-7p2pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf494c86e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:02.867294 containerd[1286]: 2024-06-25 16:24:02.849 [INFO][4409] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.867294 containerd[1286]: 2024-06-25 16:24:02.849 [INFO][4409] dataplane_linux.go 68: Setting the host side veth name to calicf494c86e3b ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.867294 containerd[1286]: 2024-06-25 16:24:02.851 [INFO][4409] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.867294 containerd[1286]: 2024-06-25 16:24:02.854 [INFO][4409] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a782fdb-c775-468b-a146-70b65f402d66", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81", Pod:"coredns-7db6d8ff4d-7p2pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf494c86e3b", MAC:"d2:ea:16:27:a2:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:02.867294 containerd[1286]: 2024-06-25 16:24:02.864 [INFO][4409] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7p2pf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:02.895000 audit[4438]: NETFILTER_CFG table=filter:114 family=2 entries=38 op=nft_register_chain pid=4438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:02.895000 audit[4438]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffe199fc410 a2=0 a3=7ffe199fc3fc items=0 ppid=3662 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.895000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:02.898427 containerd[1286]: time="2024-06-25T16:24:02.898281291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:02.898427 containerd[1286]: time="2024-06-25T16:24:02.898339543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:02.898427 containerd[1286]: time="2024-06-25T16:24:02.898353990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:02.898427 containerd[1286]: time="2024-06-25T16:24:02.898363619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:02.922344 systemd[1]: Started cri-containerd-a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81.scope - libcontainer container a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81. Jun 25 16:24:02.932000 audit: BPF prog-id=171 op=LOAD Jun 25 16:24:02.933000 audit: BPF prog-id=172 op=LOAD Jun 25 16:24:02.933000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134306364363162346434376264643839623661313632626438623039 Jun 25 16:24:02.933000 audit: BPF prog-id=173 op=LOAD Jun 25 16:24:02.933000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134306364363162346434376264643839623661313632626438623039 Jun 25 16:24:02.933000 audit: BPF prog-id=173 op=UNLOAD Jun 25 16:24:02.933000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:24:02.933000 audit: BPF prog-id=174 op=LOAD Jun 25 16:24:02.933000 audit[4460]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4447 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:02.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134306364363162346434376264643839623661313632626438623039 Jun 25 16:24:02.935049 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:24:02.963736 containerd[1286]: time="2024-06-25T16:24:02.963609544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7p2pf,Uid:7a782fdb-c775-468b-a146-70b65f402d66,Namespace:kube-system,Attempt:1,} returns sandbox id \"a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81\"" Jun 25 16:24:02.964621 kubelet[2299]: E0625 16:24:02.964591 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:02.966790 containerd[1286]: time="2024-06-25T16:24:02.966754132Z" level=info msg="CreateContainer within sandbox \"a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:24:02.982834 containerd[1286]: time="2024-06-25T16:24:02.982785221Z" level=info msg="CreateContainer within sandbox \"a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72cabf322d1a1775db771c3622c9db48004c8537ab1252fbc2492b004e1be178\"" Jun 25 16:24:02.983450 containerd[1286]: time="2024-06-25T16:24:02.983416489Z" level=info msg="StartContainer for \"72cabf322d1a1775db771c3622c9db48004c8537ab1252fbc2492b004e1be178\"" Jun 25 16:24:02.985756 kubelet[2299]: I0625 16:24:02.985725 2299 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:24:02.985756 kubelet[2299]: I0625 16:24:02.985757 2299 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:24:03.018210 systemd[1]: Started cri-containerd-72cabf322d1a1775db771c3622c9db48004c8537ab1252fbc2492b004e1be178.scope - libcontainer container 72cabf322d1a1775db771c3622c9db48004c8537ab1252fbc2492b004e1be178. Jun 25 16:24:03.027000 audit: BPF prog-id=175 op=LOAD Jun 25 16:24:03.027000 audit: BPF prog-id=176 op=LOAD Jun 25 16:24:03.027000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4447 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636162663332326431613137373564623737316333363232633964 Jun 25 16:24:03.027000 audit: BPF prog-id=177 op=LOAD Jun 25 16:24:03.027000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4447 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636162663332326431613137373564623737316333363232633964 Jun 25 16:24:03.027000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:24:03.027000 audit: BPF prog-id=176 op=UNLOAD Jun 25 16:24:03.027000 audit: BPF prog-id=178 op=LOAD Jun 25 16:24:03.027000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4447 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.027000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732636162663332326431613137373564623737316333363232633964 Jun 25 16:24:03.041288 containerd[1286]: time="2024-06-25T16:24:03.041246394Z" level=info msg="StartContainer for \"72cabf322d1a1775db771c3622c9db48004c8537ab1252fbc2492b004e1be178\" returns successfully" Jun 25 16:24:03.157220 kubelet[2299]: E0625 16:24:03.156800 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:03.158642 kubelet[2299]: E0625 16:24:03.158604 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:03.183285 kubelet[2299]: I0625 16:24:03.182939 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7p2pf" podStartSLOduration=43.182917758 podStartE2EDuration="43.182917758s" podCreationTimestamp="2024-06-25 16:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:03.176282605 +0000 UTC m=+59.333512937" watchObservedRunningTime="2024-06-25 16:24:03.182917758 +0000 UTC m=+59.340148090" Jun 25 16:24:03.202000 audit[4534]: NETFILTER_CFG table=filter:115 family=2 entries=8 op=nft_register_rule pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:03.202000 audit[4534]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe743104c0 a2=0 a3=7ffe743104ac items=0 ppid=2502 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:03.203000 audit[4534]: NETFILTER_CFG table=nat:116 family=2 entries=44 op=nft_register_rule pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:03.203000 audit[4534]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe743104c0 a2=0 a3=7ffe743104ac items=0 ppid=2502 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:03.805980 containerd[1286]: time="2024-06-25T16:24:03.805931544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:03.807088 containerd[1286]: time="2024-06-25T16:24:03.807042800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:24:03.808459 containerd[1286]: time="2024-06-25T16:24:03.808438330Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 16:24:03.810615 containerd[1286]: time="2024-06-25T16:24:03.810582692Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:03.812413 containerd[1286]: time="2024-06-25T16:24:03.812375952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:03.813107 containerd[1286]: time="2024-06-25T16:24:03.813038290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.640625668s" Jun 25 16:24:03.813107 containerd[1286]: time="2024-06-25T16:24:03.813103033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:24:03.822141 containerd[1286]: time="2024-06-25T16:24:03.820674048Z" level=info msg="CreateContainer within sandbox \"3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:24:03.837037 containerd[1286]: time="2024-06-25T16:24:03.836976906Z" level=info msg="CreateContainer within sandbox \"3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8d1401458b52f8557399d944cd8c309ea146643ac39b378177419d040b035505\"" Jun 25 16:24:03.837605 containerd[1286]: time="2024-06-25T16:24:03.837585349Z" level=info msg="StartContainer for \"8d1401458b52f8557399d944cd8c309ea146643ac39b378177419d040b035505\"" Jun 25 16:24:03.862583 systemd[1]: Started cri-containerd-8d1401458b52f8557399d944cd8c309ea146643ac39b378177419d040b035505.scope - libcontainer container 8d1401458b52f8557399d944cd8c309ea146643ac39b378177419d040b035505. 
Jun 25 16:24:03.874000 audit: BPF prog-id=179 op=LOAD Jun 25 16:24:03.874000 audit: BPF prog-id=180 op=LOAD Jun 25 16:24:03.874000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4187 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864313430313435386235326638353537333939643934346364386333 Jun 25 16:24:03.874000 audit: BPF prog-id=181 op=LOAD Jun 25 16:24:03.874000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4187 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864313430313435386235326638353537333939643934346364386333 Jun 25 16:24:03.874000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:24:03.874000 audit: BPF prog-id=180 op=UNLOAD Jun 25 16:24:03.874000 audit: BPF prog-id=182 op=LOAD Jun 25 16:24:03.874000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4187 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:03.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864313430313435386235326638353537333939643934346364386333 Jun 25 16:24:03.903231 containerd[1286]: time="2024-06-25T16:24:03.903184194Z" level=info msg="StartContainer for \"8d1401458b52f8557399d944cd8c309ea146643ac39b378177419d040b035505\" returns successfully" Jun 25 16:24:03.917811 containerd[1286]: time="2024-06-25T16:24:03.914658364Z" level=info msg="StopPodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\"" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.017 [WARNING][4597] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0", GenerateName:"calico-kube-controllers-55495db4d7-", Namespace:"calico-system", SelfLink:"", UID:"836f38ef-e93b-478c-ac66-9060ef4334b9", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55495db4d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482", Pod:"calico-kube-controllers-55495db4d7-r5s6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e34231c07e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.017 [INFO][4597] k8s.go 608: Cleaning up netns ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.017 [INFO][4597] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" iface="eth0" netns="" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.017 [INFO][4597] k8s.go 615: Releasing IP address(es) ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.017 [INFO][4597] utils.go 188: Calico CNI releasing IP address ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.051 [INFO][4606] ipam_plugin.go 411: Releasing address using handleID ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.051 [INFO][4606] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.051 [INFO][4606] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.059 [WARNING][4606] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.059 [INFO][4606] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.061 [INFO][4606] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:04.066439 containerd[1286]: 2024-06-25 16:24:04.064 [INFO][4597] k8s.go 621: Teardown processing complete. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.066439 containerd[1286]: time="2024-06-25T16:24:04.065875836Z" level=info msg="TearDown network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" successfully" Jun 25 16:24:04.066439 containerd[1286]: time="2024-06-25T16:24:04.065916123Z" level=info msg="StopPodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" returns successfully" Jun 25 16:24:04.072990 containerd[1286]: time="2024-06-25T16:24:04.072450859Z" level=info msg="RemovePodSandbox for \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\"" Jun 25 16:24:04.084126 containerd[1286]: time="2024-06-25T16:24:04.075572639Z" level=info msg="Forcibly stopping sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\"" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.116 [WARNING][4632] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0", GenerateName:"calico-kube-controllers-55495db4d7-", Namespace:"calico-system", SelfLink:"", UID:"836f38ef-e93b-478c-ac66-9060ef4334b9", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55495db4d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3725077ba8dd759f5889169d9a912cfa683cdf3323705a6ced4df7e706215482", Pod:"calico-kube-controllers-55495db4d7-r5s6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e34231c07e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.116 [INFO][4632] k8s.go 608: Cleaning up netns ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.116 [INFO][4632] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" iface="eth0" netns="" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.116 [INFO][4632] k8s.go 615: Releasing IP address(es) ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.116 [INFO][4632] utils.go 188: Calico CNI releasing IP address ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.136 [INFO][4639] ipam_plugin.go 411: Releasing address using handleID ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.136 [INFO][4639] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.136 [INFO][4639] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.141 [WARNING][4639] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.141 [INFO][4639] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" HandleID="k8s-pod-network.ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Workload="localhost-k8s-calico--kube--controllers--55495db4d7--r5s6p-eth0" Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.142 [INFO][4639] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:04.144963 containerd[1286]: 2024-06-25 16:24:04.143 [INFO][4632] k8s.go 621: Teardown processing complete. ContainerID="ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea" Jun 25 16:24:04.145480 containerd[1286]: time="2024-06-25T16:24:04.145006522Z" level=info msg="TearDown network for sandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" successfully" Jun 25 16:24:04.161388 kubelet[2299]: E0625 16:24:04.161357 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:04.515000 audit[4648]: NETFILTER_CFG table=filter:117 family=2 entries=8 op=nft_register_rule pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:04.515000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff7d04e520 a2=0 a3=7fff7d04e50c items=0 ppid=2502 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:04.515000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:04.543000 audit[4648]: NETFILTER_CFG table=nat:118 family=2 entries=56 op=nft_register_chain pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:04.543000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff7d04e520 a2=0 a3=7fff7d04e50c items=0 ppid=2502 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:04.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:04.563789 containerd[1286]: time="2024-06-25T16:24:04.563716186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:24:04.563970 containerd[1286]: time="2024-06-25T16:24:04.563825936Z" level=info msg="RemovePodSandbox \"ddf9b1ae73fdb2c8c7210d5d9f275b1e498392f8c219409f40aef4bd6b22d9ea\" returns successfully" Jun 25 16:24:04.564376 containerd[1286]: time="2024-06-25T16:24:04.564351521Z" level=info msg="StopPodSandbox for \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\"" Jun 25 16:24:04.564616 containerd[1286]: time="2024-06-25T16:24:04.564551103Z" level=info msg="TearDown network for sandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" successfully" Jun 25 16:24:04.564616 containerd[1286]: time="2024-06-25T16:24:04.564608793Z" level=info msg="StopPodSandbox for \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" returns successfully" Jun 25 16:24:04.564869 containerd[1286]: time="2024-06-25T16:24:04.564846588Z" level=info msg="RemovePodSandbox for \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\"" Jun 25 16:24:04.564924 containerd[1286]: time="2024-06-25T16:24:04.564874912Z" level=info msg="Forcibly stopping sandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\"" Jun 25 16:24:04.564966 containerd[1286]: time="2024-06-25T16:24:04.564937402Z" level=info msg="TearDown network for sandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" successfully" Jun 25 16:24:04.625879 containerd[1286]: time="2024-06-25T16:24:04.625821724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:04.626056 containerd[1286]: time="2024-06-25T16:24:04.625909032Z" level=info msg="RemovePodSandbox \"1dd5bafd085a6564a9fe1e3dea0ced34264fe6f891e316df9b09c502ef655d18\" returns successfully" Jun 25 16:24:04.626628 containerd[1286]: time="2024-06-25T16:24:04.626588941Z" level=info msg="StopPodSandbox for \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\"" Jun 25 16:24:04.626801 containerd[1286]: time="2024-06-25T16:24:04.626747234Z" level=info msg="TearDown network for sandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" successfully" Jun 25 16:24:04.626833 containerd[1286]: time="2024-06-25T16:24:04.626811577Z" level=info msg="StopPodSandbox for \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" returns successfully" Jun 25 16:24:04.627518 containerd[1286]: time="2024-06-25T16:24:04.627470798Z" level=info msg="RemovePodSandbox for \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\"" Jun 25 16:24:04.627588 containerd[1286]: time="2024-06-25T16:24:04.627523078Z" level=info msg="Forcibly stopping sandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\"" Jun 25 16:24:04.627684 containerd[1286]: time="2024-06-25T16:24:04.627650813Z" level=info msg="TearDown network for sandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" successfully" Jun 25 16:24:04.655359 systemd-networkd[1115]: calicf494c86e3b: Gained IPv6LL Jun 25 16:24:04.677037 containerd[1286]: time="2024-06-25T16:24:04.676964057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:24:04.677229 containerd[1286]: time="2024-06-25T16:24:04.677114345Z" level=info msg="RemovePodSandbox \"a2fd9c816043199f494870e745528d0a8c56c3f770a34293dccb67a9a67e7996\" returns successfully" Jun 25 16:24:04.679050 containerd[1286]: time="2024-06-25T16:24:04.678851276Z" level=info msg="StopPodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\"" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.728 [WARNING][4665] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a782fdb-c775-468b-a146-70b65f402d66", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81", Pod:"coredns-7db6d8ff4d-7p2pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf494c86e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.728 [INFO][4665] k8s.go 608: Cleaning up netns ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.728 [INFO][4665] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" iface="eth0" netns="" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.728 [INFO][4665] k8s.go 615: Releasing IP address(es) ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.728 [INFO][4665] utils.go 188: Calico CNI releasing IP address ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.753 [INFO][4672] ipam_plugin.go 411: Releasing address using handleID ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.753 [INFO][4672] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.753 [INFO][4672] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.759 [WARNING][4672] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.759 [INFO][4672] ipam_plugin.go 439: Releasing address using workloadID ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.760 [INFO][4672] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:04.763595 containerd[1286]: 2024-06-25 16:24:04.762 [INFO][4665] k8s.go 621: Teardown processing complete. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.764293 containerd[1286]: time="2024-06-25T16:24:04.763645083Z" level=info msg="TearDown network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" successfully" Jun 25 16:24:04.764293 containerd[1286]: time="2024-06-25T16:24:04.763726729Z" level=info msg="StopPodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" returns successfully" Jun 25 16:24:04.764402 containerd[1286]: time="2024-06-25T16:24:04.764371952Z" level=info msg="RemovePodSandbox for \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\"" Jun 25 16:24:04.764458 containerd[1286]: time="2024-06-25T16:24:04.764412360Z" level=info msg="Forcibly stopping sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\"" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.814 [WARNING][4696] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a782fdb-c775-468b-a146-70b65f402d66", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a40cd61b4d47bdd89b6a162bd8b09765770309c9242dc8bd42cb6c8305afba81", Pod:"coredns-7db6d8ff4d-7p2pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf494c86e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.814 [INFO][4696] k8s.go 608: Cleaning up netns ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.814 [INFO][4696] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" iface="eth0" netns="" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.814 [INFO][4696] k8s.go 615: Releasing IP address(es) ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.814 [INFO][4696] utils.go 188: Calico CNI releasing IP address ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.835 [INFO][4703] ipam_plugin.go 411: Releasing address using handleID ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.836 [INFO][4703] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.836 [INFO][4703] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.858 [WARNING][4703] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.858 [INFO][4703] ipam_plugin.go 439: Releasing address using workloadID ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" HandleID="k8s-pod-network.01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Workload="localhost-k8s-coredns--7db6d8ff4d--7p2pf-eth0" Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.860 [INFO][4703] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:04.862518 containerd[1286]: 2024-06-25 16:24:04.861 [INFO][4696] k8s.go 621: Teardown processing complete. ContainerID="01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052" Jun 25 16:24:04.862518 containerd[1286]: time="2024-06-25T16:24:04.862486263Z" level=info msg="TearDown network for sandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" successfully" Jun 25 16:24:04.947219 containerd[1286]: time="2024-06-25T16:24:04.947160931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:04.947453 containerd[1286]: time="2024-06-25T16:24:04.947255552Z" level=info msg="RemovePodSandbox \"01e02f2cb96469c06e93d42e552f89144e1f662164be2c83aa2ad7802fc0a052\" returns successfully" Jun 25 16:24:04.947805 containerd[1286]: time="2024-06-25T16:24:04.947779023Z" level=info msg="StopPodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\"" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.072 [WARNING][4726] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8m25c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85542427-f47c-46c9-a170-591e5c3b27fa", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a", Pod:"csi-node-driver-8m25c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3afc8c10cac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.072 [INFO][4726] k8s.go 608: Cleaning up netns ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.072 [INFO][4726] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" iface="eth0" netns="" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.072 [INFO][4726] k8s.go 615: Releasing IP address(es) ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.073 [INFO][4726] utils.go 188: Calico CNI releasing IP address ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.095 [INFO][4734] ipam_plugin.go 411: Releasing address using handleID ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.095 [INFO][4734] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.095 [INFO][4734] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.100 [WARNING][4734] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.100 [INFO][4734] ipam_plugin.go 439: Releasing address using workloadID ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.102 [INFO][4734] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:05.104719 containerd[1286]: 2024-06-25 16:24:05.103 [INFO][4726] k8s.go 621: Teardown processing complete. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.105202 containerd[1286]: time="2024-06-25T16:24:05.104763094Z" level=info msg="TearDown network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" successfully" Jun 25 16:24:05.105202 containerd[1286]: time="2024-06-25T16:24:05.104795006Z" level=info msg="StopPodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" returns successfully" Jun 25 16:24:05.105382 containerd[1286]: time="2024-06-25T16:24:05.105334177Z" level=info msg="RemovePodSandbox for \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\"" Jun 25 16:24:05.105570 containerd[1286]: time="2024-06-25T16:24:05.105386857Z" level=info msg="Forcibly stopping sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\"" Jun 25 16:24:05.167910 kubelet[2299]: E0625 16:24:05.167523 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.139 [WARNING][4758] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8m25c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85542427-f47c-46c9-a170-591e5c3b27fa", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73415578bcdcbc04492fbd10d8f2c7c1697527fde54902ff324424c2c65eab8a", Pod:"csi-node-driver-8m25c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3afc8c10cac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.140 [INFO][4758] k8s.go 608: Cleaning up netns ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.140 [INFO][4758] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" iface="eth0" netns="" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.140 [INFO][4758] k8s.go 615: Releasing IP address(es) ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.140 [INFO][4758] utils.go 188: Calico CNI releasing IP address ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.158 [INFO][4766] ipam_plugin.go 411: Releasing address using handleID ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.158 [INFO][4766] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.158 [INFO][4766] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.166 [WARNING][4766] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.166 [INFO][4766] ipam_plugin.go 439: Releasing address using workloadID ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" HandleID="k8s-pod-network.41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Workload="localhost-k8s-csi--node--driver--8m25c-eth0" Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.226 [INFO][4766] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:05.230356 containerd[1286]: 2024-06-25 16:24:05.228 [INFO][4758] k8s.go 621: Teardown processing complete. ContainerID="41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc" Jun 25 16:24:05.230825 containerd[1286]: time="2024-06-25T16:24:05.230385265Z" level=info msg="TearDown network for sandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" successfully" Jun 25 16:24:05.242632 kubelet[2299]: I0625 16:24:05.242567 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55495db4d7-r5s6p" podStartSLOduration=33.041912788 podStartE2EDuration="36.24253423s" podCreationTimestamp="2024-06-25 16:23:29 +0000 UTC" firstStartedPulling="2024-06-25 16:24:00.613087251 +0000 UTC m=+56.770317583" lastFinishedPulling="2024-06-25 16:24:03.813708693 +0000 UTC m=+59.970939025" observedRunningTime="2024-06-25 16:24:04.18506029 +0000 UTC m=+60.342290622" watchObservedRunningTime="2024-06-25 16:24:05.24253423 +0000 UTC m=+61.399764592" Jun 25 16:24:05.306504 containerd[1286]: time="2024-06-25T16:24:05.306446371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:05.306697 containerd[1286]: time="2024-06-25T16:24:05.306545772Z" level=info msg="RemovePodSandbox \"41258e309d0e3d8518329afe011058da47bc81100abf03c1787b1101768d3bdc\" returns successfully" Jun 25 16:24:05.307147 containerd[1286]: time="2024-06-25T16:24:05.307118466Z" level=info msg="StopPodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\"" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.570 [WARNING][4813] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d669eea6-8c43-4a18-a92b-b250a05611e1", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7", Pod:"coredns-7db6d8ff4d-rzb9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65c3675229f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.570 [INFO][4813] k8s.go 608: Cleaning up netns ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.570 [INFO][4813] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" iface="eth0" netns="" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.570 [INFO][4813] k8s.go 615: Releasing IP address(es) ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.570 [INFO][4813] utils.go 188: Calico CNI releasing IP address ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.590 [INFO][4843] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.590 [INFO][4843] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.590 [INFO][4843] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.597 [WARNING][4843] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.597 [INFO][4843] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.598 [INFO][4843] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:05.601803 containerd[1286]: 2024-06-25 16:24:05.600 [INFO][4813] k8s.go 621: Teardown processing complete. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.601803 containerd[1286]: time="2024-06-25T16:24:05.601758347Z" level=info msg="TearDown network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" successfully" Jun 25 16:24:05.602836 containerd[1286]: time="2024-06-25T16:24:05.602399303Z" level=info msg="StopPodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" returns successfully" Jun 25 16:24:05.602990 containerd[1286]: time="2024-06-25T16:24:05.602956497Z" level=info msg="RemovePodSandbox for \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\"" Jun 25 16:24:05.603041 containerd[1286]: time="2024-06-25T16:24:05.602999359Z" level=info msg="Forcibly stopping sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\"" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.643 [WARNING][4865] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d669eea6-8c43-4a18-a92b-b250a05611e1", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc287c68d5be17cec573a961d72651b50f58b340f503fa64e8deac62116072d7", Pod:"coredns-7db6d8ff4d-rzb9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65c3675229f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.643 [INFO][4865] k8s.go 608: Cleaning up netns ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.643 [INFO][4865] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" iface="eth0" netns="" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.643 [INFO][4865] k8s.go 615: Releasing IP address(es) ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.643 [INFO][4865] utils.go 188: Calico CNI releasing IP address ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.666 [INFO][4873] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.666 [INFO][4873] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.666 [INFO][4873] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.877 [WARNING][4873] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.877 [INFO][4873] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" HandleID="k8s-pod-network.1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Workload="localhost-k8s-coredns--7db6d8ff4d--rzb9h-eth0" Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.878 [INFO][4873] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:24:05.881475 containerd[1286]: 2024-06-25 16:24:05.879 [INFO][4865] k8s.go 621: Teardown processing complete. ContainerID="1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d" Jun 25 16:24:05.882136 containerd[1286]: time="2024-06-25T16:24:05.881518082Z" level=info msg="TearDown network for sandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" successfully" Jun 25 16:24:05.972841 containerd[1286]: time="2024-06-25T16:24:05.972782701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:24:05.972994 containerd[1286]: time="2024-06-25T16:24:05.972885929Z" level=info msg="RemovePodSandbox \"1ea6ad44a7a03f9b210efd9d365ea65ab9be44ae0f444def30b79e44da4cd67d\" returns successfully" Jun 25 16:24:06.061000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:06.061000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:06.061000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0033d41a0 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:06.061000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:06.061000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c003570300 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:06.061000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:06.061000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:06.061000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=c0034c0000 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:06.061000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:06.061000 audit[2183]: AVC avc: denied { watch } for pid=2183 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=520964 scontext=system_u:system_r:container_t:s0:c120,c627 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:06.061000 audit[2183]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0033d41c0 a2=fc6 a3=0 items=0 ppid=2008 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c120,c627 key=(null) Jun 25 16:24:06.061000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:06.901260 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:59040.service - OpenSSH per-connection server daemon (10.0.0.1:59040). Jun 25 16:24:06.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.90:22-10.0.0.1:59040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:06.907260 kernel: kauditd_printk_skb: 111 callbacks suppressed Jun 25 16:24:06.907324 kernel: audit: type=1130 audit(1719332646.900:695): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.90:22-10.0.0.1:59040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:06.940000 audit[4883]: USER_ACCT pid=4883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.942100 sshd[4883]: Accepted publickey for core from 10.0.0.1 port 59040 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:06.943457 sshd[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:06.941000 audit[4883]: CRED_ACQ pid=4883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.946982 systemd-logind[1277]: New session 16 of user core. Jun 25 16:24:06.948086 kernel: audit: type=1101 audit(1719332646.940:696): pid=4883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.948138 kernel: audit: type=1103 audit(1719332646.941:697): pid=4883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.948163 kernel: audit: type=1006 audit(1719332646.941:698): pid=4883 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:24:06.941000 audit[4883]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff39a5ee60 a2=3 a3=7f0722fde480 items=0 ppid=1 pid=4883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:06.953995 kernel: audit: type=1300 audit(1719332646.941:698): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff39a5ee60 a2=3 a3=7f0722fde480 items=0 ppid=1 pid=4883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:06.954045 kernel: audit: type=1327 audit(1719332646.941:698): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:06.941000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:06.954277 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:24:06.957000 audit[4883]: USER_START pid=4883 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.958000 audit[4885]: CRED_ACQ pid=4885 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.993092 kernel: audit: type=1105 audit(1719332646.957:699): pid=4883 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:06.993183 kernel: audit: type=1103 audit(1719332646.958:700): pid=4885 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:07.063085 sshd[4883]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:07.062000 audit[4883]: USER_END pid=4883 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:07.064970 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:59040.service: Deactivated successfully. Jun 25 16:24:07.065671 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:24:07.066439 systemd-logind[1277]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:24:07.067154 systemd-logind[1277]: Removed session 16. Jun 25 16:24:07.062000 audit[4883]: CRED_DISP pid=4883 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:07.091107 kernel: audit: type=1106 audit(1719332647.062:701): pid=4883 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:07.091162 kernel: audit: type=1104 audit(1719332647.062:702): pid=4883 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:07.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.90:22-10.0.0.1:59040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.077428 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:59048.service - OpenSSH per-connection server daemon (10.0.0.1:59048). Jun 25 16:24:12.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.90:22-10.0.0.1:59048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:24:12.096220 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:12.096341 kernel: audit: type=1130 audit(1719332652.076:704): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.90:22-10.0.0.1:59048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.122000 audit[4899]: USER_ACCT pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.124160 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 59048 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:12.125550 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:12.124000 audit[4899]: CRED_ACQ pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.129908 systemd-logind[1277]: New session 17 of user core. Jun 25 16:24:12.130745 kernel: audit: type=1101 audit(1719332652.122:705): pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.130789 kernel: audit: type=1103 audit(1719332652.124:706): pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.130821 kernel: audit: type=1006 audit(1719332652.124:707): pid=4899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:24:12.132613 kernel: audit: type=1300 audit(1719332652.124:707): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd7dfc120 a2=3 a3=7fdbc1f22480 items=0 ppid=1 pid=4899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.124000 audit[4899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd7dfc120 a2=3 a3=7fdbc1f22480 items=0 ppid=1 pid=4899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:12.136281 kernel: audit: type=1327 audit(1719332652.124:707): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:12.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:12.142262 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:24:12.146000 audit[4899]: USER_START pid=4899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.146000 audit[4901]: CRED_ACQ pid=4901 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.154984 kernel: audit: type=1105 audit(1719332652.146:708): pid=4899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.155106 kernel: audit: type=1103 audit(1719332652.146:709): pid=4901 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.254248 sshd[4899]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:12.254000 audit[4899]: USER_END pid=4899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.256314 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:59048.service: Deactivated successfully. Jun 25 16:24:12.257095 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:24:12.257631 systemd-logind[1277]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:24:12.258339 systemd-logind[1277]: Removed session 17. Jun 25 16:24:12.254000 audit[4899]: CRED_DISP pid=4899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.261723 kernel: audit: type=1106 audit(1719332652.254:710): pid=4899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.261859 kernel: audit: type=1104 audit(1719332652.254:711): pid=4899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:12.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.90:22-10.0.0.1:59048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:17.270432 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:55802.service - OpenSSH per-connection server daemon (10.0.0.1:55802). Jun 25 16:24:17.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.90:22-10.0.0.1:55802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:24:17.271613 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:17.271668 kernel: audit: type=1130 audit(1719332657.269:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.90:22-10.0.0.1:55802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:17.306000 audit[4946]: USER_ACCT pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.307898 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 55802 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:17.309310 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:17.307000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.313361 systemd-logind[1277]: New session 18 of user core. Jun 25 16:24:17.315716 kernel: audit: type=1101 audit(1719332657.306:714): pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.315777 kernel: audit: type=1103 audit(1719332657.307:715): pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.315809 kernel: audit: type=1006 audit(1719332657.307:716): pid=4946 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:24:17.307000 audit[4946]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe78673c70 a2=3 a3=7f84de7a8480 items=0 ppid=1 pid=4946 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:17.321689 kernel: audit: type=1300 audit(1719332657.307:716): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe78673c70 a2=3 a3=7f84de7a8480 items=0 ppid=1 pid=4946 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:17.321746 kernel: audit: type=1327 audit(1719332657.307:716): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:17.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:17.332447 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:24:17.335000 audit[4946]: USER_START pid=4946 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.337000 audit[4948]: CRED_ACQ pid=4948 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.343328 kernel: audit: type=1105 audit(1719332657.335:717): pid=4946 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.343371 kernel: audit: type=1103 audit(1719332657.337:718): pid=4948 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.438338 sshd[4946]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:17.439000 audit[4946]: USER_END pid=4946 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.442926 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:55802.service: Deactivated successfully. Jun 25 16:24:17.443900 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:24:17.439000 audit[4946]: CRED_DISP pid=4946 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.445303 systemd-logind[1277]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:24:17.446131 systemd-logind[1277]: Removed session 18. Jun 25 16:24:17.447781 kernel: audit: type=1106 audit(1719332657.439:719): pid=4946 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.447841 kernel: audit: type=1104 audit(1719332657.439:720): pid=4946 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:17.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.90:22-10.0.0.1:55802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:22.450269 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:55816.service - OpenSSH per-connection server daemon (10.0.0.1:55816). Jun 25 16:24:22.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.90:22-10.0.0.1:55816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:24:22.508754 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:22.508864 kernel: audit: type=1130 audit(1719332662.449:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.90:22-10.0.0.1:55816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:22.530000 audit[4961]: USER_ACCT pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.532267 sshd[4961]: Accepted publickey for core from 10.0.0.1 port 55816 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:22.533304 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:22.531000 audit[4961]: CRED_ACQ pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.537096 systemd-logind[1277]: New session 19 of user core. Jun 25 16:24:22.538222 kernel: audit: type=1101 audit(1719332662.530:723): pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.538285 kernel: audit: type=1103 audit(1719332662.531:724): pid=4961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.538314 kernel: audit: type=1006 audit(1719332662.531:725): pid=4961 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:24:22.531000 audit[4961]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee90b1cb0 a2=3 a3=7effadafe480 items=0 ppid=1 pid=4961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:22.543385 kernel: audit: type=1300 audit(1719332662.531:725): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee90b1cb0 a2=3 a3=7effadafe480 items=0 ppid=1 pid=4961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:22.543423 kernel: audit: type=1327 audit(1719332662.531:725): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:22.531000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:22.546288 systemd[1]: Started session-19.scope - Session 19 of User core. 
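Every SSH login in this log follows the same audit sequence, keyed by the ses= field: USER_ACCT, CRED_ACQ and USER_START when the session opens, then USER_END and CRED_DISP when it closes. A small sketch that groups events by session id; the sample strings are abbreviated copies of records from this log, not a parser for the full audit format:

import re
from collections import defaultdict

# Group audit events by their ses= field to reconstruct each session's lifecycle.
EVENT_RE = re.compile(r"audit\[\d+\]: (?P<type>[A-Z_]+) .*?ses=(?P<ses>\d+)")

sample = """\
audit[4961]: USER_START pid=4961 uid=0 auid=500 ses=19
audit[4961]: USER_END pid=4961 uid=0 auid=500 ses=19
audit[5030]: USER_START pid=5030 uid=0 auid=500 ses=22
"""

sessions = defaultdict(list)
for m in EVENT_RE.finditer(sample):
    sessions[m.group("ses")].append(m.group("type"))

print(dict(sessions))  # {'19': ['USER_START', 'USER_END'], '22': ['USER_START']}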
Jun 25 16:24:22.549000 audit[4961]: USER_START pid=4961 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.551000 audit[4963]: CRED_ACQ pid=4963 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.591946 kernel: audit: type=1105 audit(1719332662.549:726): pid=4961 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.591994 kernel: audit: type=1103 audit(1719332662.551:727): pid=4963 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.812453 sshd[4961]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:22.812000 audit[4961]: USER_END pid=4961 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.812000 audit[4961]: CRED_DISP pid=4961 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.819547 kernel: audit: type=1106 audit(1719332662.812:728): pid=4961 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.819599 kernel: audit: type=1104 audit(1719332662.812:729): pid=4961 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.823197 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:55816.service: Deactivated successfully. Jun 25 16:24:22.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.90:22-10.0.0.1:55816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:22.823842 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:24:22.824425 systemd-logind[1277]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:24:22.825729 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:55824.service - OpenSSH per-connection server daemon (10.0.0.1:55824). Jun 25 16:24:22.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.90:22-10.0.0.1:55824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:22.826625 systemd-logind[1277]: Removed session 19. Jun 25 16:24:22.856000 audit[4974]: USER_ACCT pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.857617 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 55824 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:22.857000 audit[4974]: CRED_ACQ pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.857000 audit[4974]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7a7f0090 a2=3 a3=7fb321047480 items=0 ppid=1 pid=4974 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:22.857000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:22.858600 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:22.861893 systemd-logind[1277]: New session 20 of user core. Jun 25 16:24:22.867228 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 16:24:22.870000 audit[4974]: USER_START pid=4974 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:22.871000 audit[4976]: CRED_ACQ pid=4976 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:23.438546 sshd[4974]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:23.438000 audit[4974]: USER_END pid=4974 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:23.438000 audit[4974]: CRED_DISP pid=4974 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:23.450265 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:55824.service: Deactivated successfully. Jun 25 16:24:23.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.90:22-10.0.0.1:55824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:23.450806 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:24:23.451351 systemd-logind[1277]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:24:23.452617 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:55830.service - OpenSSH per-connection server daemon (10.0.0.1:55830). 
Jun 25 16:24:23.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.90:22-10.0.0.1:55830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:23.453359 systemd-logind[1277]: Removed session 20. Jun 25 16:24:23.487000 audit[4988]: USER_ACCT pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:23.488598 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 55830 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:23.488000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:23.488000 audit[4988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5f424f0 a2=3 a3=7faee259d480 items=0 ppid=1 pid=4988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:23.488000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:23.489576 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:23.493350 systemd-logind[1277]: New session 21 of user core. Jun 25 16:24:23.504314 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:24:23.507000 audit[4988]: USER_START pid=4988 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:23.508000 audit[4990]: CRED_ACQ pid=4990 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.258000 audit[5025]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:25.258000 audit[5025]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffde7135c90 a2=0 a3=7ffde7135c7c items=0 ppid=2502 pid=5025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.258000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:25.259000 audit[5025]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=5025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:25.259000 audit[5025]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffde7135c90 a2=0 a3=0 items=0 ppid=2502 pid=5025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.259000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:25.275000 audit[5027]: NETFILTER_CFG table=filter:121 family=2 entries=32 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:25.275000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffce06a80a0 a2=0 a3=7ffce06a808c items=0 ppid=2502 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.275000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:25.279633 sshd[4988]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:25.281000 audit[4988]: USER_END pid=4988 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.281000 audit[4988]: CRED_DISP pid=4988 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.276000 audit[5027]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:25.276000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce06a80a0 a2=0 a3=0 items=0 ppid=2502 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:25.286912 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:55830.service: Deactivated successfully. Jun 25 16:24:25.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.90:22-10.0.0.1:55830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:25.287550 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:24:25.288134 systemd-logind[1277]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:24:25.289517 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:55844.service - OpenSSH per-connection server daemon (10.0.0.1:55844). Jun 25 16:24:25.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.90:22-10.0.0.1:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:25.290524 systemd-logind[1277]: Removed session 21. 
Jun 25 16:24:25.324000 audit[5030]: USER_ACCT pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.326056 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 55844 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:25.325000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.325000 audit[5030]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa4decb50 a2=3 a3=7f5d0632b480 items=0 ppid=1 pid=5030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.325000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:25.327309 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:25.332803 systemd-logind[1277]: New session 22 of user core. Jun 25 16:24:25.342218 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 16:24:25.345000 audit[5030]: USER_START pid=5030 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.346000 audit[5032]: CRED_ACQ pid=5032 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.567858 sshd[5030]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:25.568000 audit[5030]: USER_END pid=5030 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.568000 audit[5030]: CRED_DISP pid=5030 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.578557 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:55844.service: Deactivated successfully. Jun 25 16:24:25.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.90:22-10.0.0.1:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:25.579155 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:24:25.579611 systemd-logind[1277]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:24:25.580775 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:55846.service - OpenSSH per-connection server daemon (10.0.0.1:55846). 
Jun 25 16:24:25.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.90:22-10.0.0.1:55846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:25.581466 systemd-logind[1277]: Removed session 22. Jun 25 16:24:25.612000 audit[5041]: USER_ACCT pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.613865 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 55846 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:25.613000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.613000 audit[5041]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec10f9290 a2=3 a3=7fbd635af480 items=0 ppid=1 pid=5041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:25.613000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:25.615130 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:25.618548 systemd-logind[1277]: New session 23 of user core. Jun 25 16:24:25.627331 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:24:25.630000 audit[5041]: USER_START pid=5041 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.631000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.741415 sshd[5041]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:25.741000 audit[5041]: USER_END pid=5041 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.741000 audit[5041]: CRED_DISP pid=5041 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:25.743870 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:55846.service: Deactivated successfully. Jun 25 16:24:25.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.90:22-10.0.0.1:55846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:25.744848 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:24:25.745483 systemd-logind[1277]: Session 23 logged out. Waiting for processes to exit. 
Jun 25 16:24:25.746350 systemd-logind[1277]: Removed session 23. Jun 25 16:24:25.919890 kubelet[2299]: E0625 16:24:25.919842 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:28.920007 kubelet[2299]: E0625 16:24:28.919966 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:30.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.90:22-10.0.0.1:56954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:30.748665 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:24:30.748710 kernel: audit: type=1130 audit(1719332670.746:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.90:22-10.0.0.1:56954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:30.747555 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:56954.service - OpenSSH per-connection server daemon (10.0.0.1:56954). Jun 25 16:24:30.798000 audit[5069]: USER_ACCT pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.799000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.801050 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:30.804463 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 56954 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:30.806732 kernel: audit: type=1101 audit(1719332670.798:772): pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.806801 kernel: audit: type=1103 audit(1719332670.799:773): pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.809162 kernel: audit: type=1006 audit(1719332670.799:774): pid=5069 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:24:30.807606 systemd-logind[1277]: New session 24 of user core. 
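The kubelet dns.go errors above are emitted when the node's resolv.conf lists more nameservers than kubelet will pass through to pods; only the first three are applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and the rest are dropped. A minimal sketch of that truncation, assuming the limit of three that kubelet enforces and a hypothetical fourth nameserver as the one being omitted:

MAX_NAMESERVERS = 3  # assumed limit, matching the behaviour reported in the log

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    # Collect nameserver entries in order and keep only the first few.
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]

# The fourth entry here is hypothetical; the log only shows the applied three.
conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(applied_nameservers(conf))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']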
Jun 25 16:24:30.817716 kernel: audit: type=1300 audit(1719332670.799:774): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7bf92820 a2=3 a3=7f13f7e29480 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:30.817818 kernel: audit: type=1327 audit(1719332670.799:774): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:30.799000 audit[5069]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7bf92820 a2=3 a3=7f13f7e29480 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:30.799000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:30.817387 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 16:24:30.822000 audit[5069]: USER_START pid=5069 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.824000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.831507 kernel: audit: type=1105 audit(1719332670.822:775): pid=5069 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.831659 kernel: audit: type=1103 audit(1719332670.824:776): pid=5076 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.941231 sshd[5069]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:30.941000 audit[5069]: USER_END pid=5069 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.944501 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:56954.service: Deactivated successfully. Jun 25 16:24:30.945516 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:24:30.941000 audit[5069]: CRED_DISP pid=5069 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.946417 systemd-logind[1277]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:24:30.947272 systemd-logind[1277]: Removed session 24. 
Jun 25 16:24:30.948988 kernel: audit: type=1106 audit(1719332670.941:777): pid=5069 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.949043 kernel: audit: type=1104 audit(1719332670.941:778): pid=5069 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:30.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.90:22-10.0.0.1:56954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:31.982000 audit[5087]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:31.982000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd9d4d2f10 a2=0 a3=7ffd9d4d2efc items=0 ppid=2502 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:31.985000 audit[5087]: NETFILTER_CFG table=nat:124 family=2 entries=104 op=nft_register_chain pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:31.985000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9d4d2f10 a2=0 a3=7ffd9d4d2efc items=0 ppid=2502 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.492400 kubelet[2299]: E0625 16:24:35.492320 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:35.952354 systemd[1]: Started sshd@24-10.0.0.90:22-10.0.0.1:56966.service - OpenSSH per-connection server daemon (10.0.0.1:56966). Jun 25 16:24:35.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.90:22-10.0.0.1:56966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:35.953355 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:24:35.953408 kernel: audit: type=1130 audit(1719332675.951:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.90:22-10.0.0.1:56966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:35.989000 audit[5123]: USER_ACCT pid=5123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:35.990506 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 56966 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:35.991823 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:35.990000 audit[5123]: CRED_ACQ pid=5123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:35.995119 systemd-logind[1277]: New session 25 of user core. Jun 25 16:24:35.996823 kernel: audit: type=1101 audit(1719332675.989:783): pid=5123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:35.996890 kernel: audit: type=1103 audit(1719332675.990:784): pid=5123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:35.996918 kernel: audit: type=1006 audit(1719332675.990:785): pid=5123 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:24:35.998875 kernel: audit: type=1300 audit(1719332675.990:785): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc124ad70 a2=3 a3=7f25232db480 items=0 ppid=1 pid=5123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.990000 audit[5123]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc124ad70 a2=3 a3=7f25232db480 items=0 ppid=1 pid=5123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.990000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:36.013682 kernel: audit: type=1327 audit(1719332675.990:785): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:36.020381 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 16:24:36.025000 audit[5123]: USER_START pid=5123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.026000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.032779 kernel: audit: type=1105 audit(1719332676.025:786): pid=5123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.032868 kernel: audit: type=1103 audit(1719332676.026:787): pid=5125 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.234650 sshd[5123]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:36.234000 audit[5123]: USER_END pid=5123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.237557 systemd[1]: sshd@24-10.0.0.90:22-10.0.0.1:56966.service: Deactivated successfully. Jun 25 16:24:36.238445 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:24:36.239120 systemd-logind[1277]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:24:36.239940 systemd-logind[1277]: Removed session 25. Jun 25 16:24:36.234000 audit[5123]: CRED_DISP pid=5123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.245967 kernel: audit: type=1106 audit(1719332676.234:788): pid=5123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.246037 kernel: audit: type=1104 audit(1719332676.234:789): pid=5123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:36.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.90:22-10.0.0.1:56966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:38.556000 audit[5137]: NETFILTER_CFG table=filter:125 family=2 entries=9 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:38.556000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe52b5b0a0 a2=0 a3=7ffe52b5b08c items=0 ppid=2502 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:38.564791 kubelet[2299]: I0625 16:24:38.564743 2299 topology_manager.go:215] "Topology Admit Handler" podUID="50c1c478-8677-4fef-9985-2e5d892a0371" podNamespace="calico-apiserver" podName="calico-apiserver-6d54fcc8f5-mmg2l" Jun 25 16:24:38.559000 audit[5137]: NETFILTER_CFG table=nat:126 family=2 entries=44 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:38.559000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffe52b5b0a0 a2=0 a3=7ffe52b5b08c items=0 ppid=2502 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:38.571493 systemd[1]: Created slice kubepods-besteffort-pod50c1c478_8677_4fef_9985_2e5d892a0371.slice - libcontainer container kubepods-besteffort-pod50c1c478_8677_4fef_9985_2e5d892a0371.slice. 
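The NETFILTER_CFG records around this point likely come from kube-proxy's periodic iptables syncs: they share the same parent pid (2502) and their hex proctitle decodes to iptables-restore -w 5 -W 100000 --noflush --counters. Each record names the table touched and how many entries were registered. A small sketch that pulls those fields out, using abbreviated copies of the records above:

import re

# Extract table, entry count and operation from NETFILTER_CFG audit records.
NETFILTER_RE = re.compile(
    r"NETFILTER_CFG table=(?P<table>\S+) family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\S+)"
)

sample = (
    "audit[5137]: NETFILTER_CFG table=filter:125 family=2 entries=9 op=nft_register_rule\n"
    "audit[5137]: NETFILTER_CFG table=nat:126 family=2 entries=44 op=nft_register_rule\n"
)

for m in NETFILTER_RE.finditer(sample):
    print(m.group("table"), m.group("entries"), m.group("op"))
# -> filter:125 9 nft_register_rule
# -> nat:126 44 nft_register_rule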
Jun 25 16:24:38.580000 audit[5139]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=5139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:38.580000 audit[5139]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffef56f64c0 a2=0 a3=7ffef56f64ac items=0 ppid=2502 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:38.583000 audit[5139]: NETFILTER_CFG table=nat:128 family=2 entries=44 op=nft_register_rule pid=5139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:38.583000 audit[5139]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffef56f64c0 a2=0 a3=7ffef56f64ac items=0 ppid=2502 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:38.685021 kubelet[2299]: I0625 16:24:38.684970 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50c1c478-8677-4fef-9985-2e5d892a0371-calico-apiserver-certs\") pod \"calico-apiserver-6d54fcc8f5-mmg2l\" (UID: \"50c1c478-8677-4fef-9985-2e5d892a0371\") " pod="calico-apiserver/calico-apiserver-6d54fcc8f5-mmg2l" Jun 25 16:24:38.685021 kubelet[2299]: I0625 16:24:38.685023 2299 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bs6\" (UniqueName: \"kubernetes.io/projected/50c1c478-8677-4fef-9985-2e5d892a0371-kube-api-access-c4bs6\") pod \"calico-apiserver-6d54fcc8f5-mmg2l\" (UID: \"50c1c478-8677-4fef-9985-2e5d892a0371\") " pod="calico-apiserver/calico-apiserver-6d54fcc8f5-mmg2l" Jun 25 16:24:38.874903 containerd[1286]: time="2024-06-25T16:24:38.874838451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fcc8f5-mmg2l,Uid:50c1c478-8677-4fef-9985-2e5d892a0371,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:24:39.002492 systemd-networkd[1115]: cali9bad4acf716: Link UP Jun 25 16:24:39.004486 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:24:39.004646 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9bad4acf716: link becomes ready Jun 25 16:24:39.004831 systemd-networkd[1115]: cali9bad4acf716: Gained carrier Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.920 [INFO][5143] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0 calico-apiserver-6d54fcc8f5- calico-apiserver 50c1c478-8677-4fef-9985-2e5d892a0371 1225 0 2024-06-25 16:24:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d54fcc8f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d54fcc8f5-mmg2l eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali9bad4acf716 [] []}} ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.920 [INFO][5143] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.951 [INFO][5155] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" HandleID="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Workload="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.961 [INFO][5155] ipam_plugin.go 264: Auto assigning IP ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" HandleID="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Workload="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eaf90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d54fcc8f5-mmg2l", "timestamp":"2024-06-25 16:24:38.951909381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.961 [INFO][5155] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.961 [INFO][5155] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.961 [INFO][5155] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.963 [INFO][5155] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.968 [INFO][5155] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.975 [INFO][5155] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.978 [INFO][5155] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.981 [INFO][5155] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.981 [INFO][5155] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.983 [INFO][5155] ipam.go 1685: Creating new handle: k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8 Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.988 [INFO][5155] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.998 [INFO][5155] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.998 [INFO][5155] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" host="localhost" Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.998 [INFO][5155] ipam_plugin.go 373: Released host-wide IPAM lock. 
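The ipam.go lines above show Calico taking the host-wide IPAM lock, confirming this node's affinity for the 192.168.88.128/26 block, and claiming one address from it (192.168.88.133) for the new pod. A short sketch of the underlying address math, using only values copied from the log:

import ipaddress

# The /26 block the node holds an affinity for, and the address handed to the pod.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.133")

print(block.num_addresses)      # 64 addresses per /26 block
print(assigned in block)        # True: .133 falls inside .128-.191
print(block.broadcast_address)  # 192.168.88.191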
Jun 25 16:24:39.024388 containerd[1286]: 2024-06-25 16:24:38.998 [INFO][5155] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" HandleID="k8s-pod-network.a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Workload="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.026373 containerd[1286]: 2024-06-25 16:24:39.000 [INFO][5143] k8s.go 386: Populated endpoint ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0", GenerateName:"calico-apiserver-6d54fcc8f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"50c1c478-8677-4fef-9985-2e5d892a0371", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fcc8f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d54fcc8f5-mmg2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bad4acf716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:39.026373 containerd[1286]: 2024-06-25 16:24:39.000 [INFO][5143] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.026373 containerd[1286]: 2024-06-25 16:24:39.000 [INFO][5143] dataplane_linux.go 68: Setting the host side veth name to cali9bad4acf716 ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.026373 containerd[1286]: 2024-06-25 16:24:39.004 [INFO][5143] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.026373 containerd[1286]: 2024-06-25 16:24:39.004 [INFO][5143] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0", GenerateName:"calico-apiserver-6d54fcc8f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"50c1c478-8677-4fef-9985-2e5d892a0371", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d54fcc8f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8", Pod:"calico-apiserver-6d54fcc8f5-mmg2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bad4acf716", MAC:"06:19:67:9d:52:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:24:39.026373 containerd[1286]: 2024-06-25 16:24:39.018 [INFO][5143] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8" Namespace="calico-apiserver" Pod="calico-apiserver-6d54fcc8f5-mmg2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d54fcc8f5--mmg2l-eth0" Jun 25 16:24:39.039000 audit[5179]: NETFILTER_CFG table=filter:129 family=2 entries=61 op=nft_register_chain pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:24:39.039000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=30316 a0=3 a1=7fff3428c880 a2=0 a3=7fff3428c86c items=0 ppid=3662 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:39.039000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:24:39.058456 containerd[1286]: time="2024-06-25T16:24:39.058357916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:39.058456 containerd[1286]: time="2024-06-25T16:24:39.058420825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:39.058776 containerd[1286]: time="2024-06-25T16:24:39.058439942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:39.058776 containerd[1286]: time="2024-06-25T16:24:39.058453918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:39.078317 systemd[1]: Started cri-containerd-a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8.scope - libcontainer container a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8. Jun 25 16:24:39.089000 audit: BPF prog-id=183 op=LOAD Jun 25 16:24:39.090000 audit: BPF prog-id=184 op=LOAD Jun 25 16:24:39.090000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5189 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:39.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137616333393739613437643665646130326263303866663731313031 Jun 25 16:24:39.090000 audit: BPF prog-id=185 op=LOAD Jun 25 16:24:39.090000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5189 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:39.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137616333393739613437643665646130326263303866663731313031 Jun 25 16:24:39.090000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:24:39.090000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:24:39.090000 audit: BPF prog-id=186 op=LOAD Jun 25 16:24:39.090000 audit[5199]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5189 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:39.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137616333393739613437643665646130326263303866663731313031 Jun 25 16:24:39.092855 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:24:39.119375 containerd[1286]: time="2024-06-25T16:24:39.119325974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d54fcc8f5-mmg2l,Uid:50c1c478-8677-4fef-9985-2e5d892a0371,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8\"" Jun 25 16:24:39.121698 containerd[1286]: time="2024-06-25T16:24:39.121651960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:24:39.920666 kubelet[2299]: E0625 16:24:39.920553 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:40.304715 systemd-networkd[1115]: cali9bad4acf716: Gained IPv6LL Jun 25 16:24:41.253259 kernel: kauditd_printk_skb: 28 callbacks suppressed Jun 25 16:24:41.253407 kernel: 
audit: type=1130 audit(1719332681.250:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.90:22-10.0.0.1:33836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:41.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.90:22-10.0.0.1:33836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:41.251638 systemd[1]: Started sshd@25-10.0.0.90:22-10.0.0.1:33836.service - OpenSSH per-connection server daemon (10.0.0.1:33836). Jun 25 16:24:41.309897 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 33836 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:41.308000 audit[5227]: USER_ACCT pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.315000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.317947 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:41.323500 kernel: audit: type=1101 audit(1719332681.308:803): pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.323649 kernel: audit: type=1103 audit(1719332681.315:804): pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.323673 kernel: audit: type=1006 audit(1719332681.316:805): pid=5227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:24:41.316000 audit[5227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4e43cd60 a2=3 a3=7f9d24d89480 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:41.328608 kernel: audit: type=1300 audit(1719332681.316:805): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4e43cd60 a2=3 a3=7f9d24d89480 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:41.328795 kernel: audit: type=1327 audit(1719332681.316:805): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:41.316000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:41.327867 systemd-logind[1277]: New session 26 of user core. Jun 25 16:24:41.334501 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 16:24:41.340000 audit[5227]: USER_START pid=5227 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.342000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.350282 kernel: audit: type=1105 audit(1719332681.340:806): pid=5227 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.350459 kernel: audit: type=1103 audit(1719332681.342:807): pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.582547 containerd[1286]: time="2024-06-25T16:24:41.581582271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.586684 containerd[1286]: time="2024-06-25T16:24:41.586628798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:24:41.590827 containerd[1286]: time="2024-06-25T16:24:41.590780917Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.595531 containerd[1286]: time="2024-06-25T16:24:41.595470556Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.598664 containerd[1286]: time="2024-06-25T16:24:41.598601798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.599460 containerd[1286]: time="2024-06-25T16:24:41.599422595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.477716312s" Jun 25 16:24:41.599527 containerd[1286]: time="2024-06-25T16:24:41.599465646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:24:41.602414 containerd[1286]: time="2024-06-25T16:24:41.602375027Z" level=info msg="CreateContainer within sandbox \"a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:24:41.624864 containerd[1286]: time="2024-06-25T16:24:41.624250137Z" level=info msg="CreateContainer within sandbox 
\"a7ac3979a47d6eda02bc08ff71101fd13db76831254e544836d3cbcaa5a3c4a8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7cd3a2f47afd7f910f6e7345acf9c429ad5c1d54c18e2eca0d3886545aa12e17\"" Jun 25 16:24:41.625379 containerd[1286]: time="2024-06-25T16:24:41.625316201Z" level=info msg="StartContainer for \"7cd3a2f47afd7f910f6e7345acf9c429ad5c1d54c18e2eca0d3886545aa12e17\"" Jun 25 16:24:41.630143 sshd[5227]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:41.631000 audit[5227]: USER_END pid=5227 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.634243 systemd[1]: sshd@25-10.0.0.90:22-10.0.0.1:33836.service: Deactivated successfully. Jun 25 16:24:41.635160 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:24:41.641315 kernel: audit: type=1106 audit(1719332681.631:808): pid=5227 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.641422 kernel: audit: type=1104 audit(1719332681.631:809): pid=5227 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.631000 audit[5227]: CRED_DISP pid=5227 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:41.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.90:22-10.0.0.1:33836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:41.638380 systemd-logind[1277]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:24:41.639887 systemd-logind[1277]: Removed session 26. Jun 25 16:24:41.659556 systemd[1]: run-containerd-runc-k8s.io-7cd3a2f47afd7f910f6e7345acf9c429ad5c1d54c18e2eca0d3886545aa12e17-runc.fpt6Ul.mount: Deactivated successfully. Jun 25 16:24:41.670535 systemd[1]: Started cri-containerd-7cd3a2f47afd7f910f6e7345acf9c429ad5c1d54c18e2eca0d3886545aa12e17.scope - libcontainer container 7cd3a2f47afd7f910f6e7345acf9c429ad5c1d54c18e2eca0d3886545aa12e17. 
Jun 25 16:24:41.685000 audit: BPF prog-id=187 op=LOAD Jun 25 16:24:41.685000 audit: BPF prog-id=188 op=LOAD Jun 25 16:24:41.685000 audit[5252]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=5189 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:41.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643361326634376166643766393130663665373334356163663963 Jun 25 16:24:41.685000 audit: BPF prog-id=189 op=LOAD Jun 25 16:24:41.685000 audit[5252]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=5189 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:41.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643361326634376166643766393130663665373334356163663963 Jun 25 16:24:41.685000 audit: BPF prog-id=189 op=UNLOAD Jun 25 16:24:41.686000 audit: BPF prog-id=188 op=UNLOAD Jun 25 16:24:41.686000 audit: BPF prog-id=190 op=LOAD Jun 25 16:24:41.686000 audit[5252]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=5189 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:41.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763643361326634376166643766393130663665373334356163663963 Jun 25 16:24:41.725120 containerd[1286]: time="2024-06-25T16:24:41.724986214Z" level=info msg="StartContainer for \"7cd3a2f47afd7f910f6e7345acf9c429ad5c1d54c18e2eca0d3886545aa12e17\" returns successfully" Jun 25 16:24:42.253044 kubelet[2299]: I0625 16:24:42.252966 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d54fcc8f5-mmg2l" podStartSLOduration=1.773726262 podStartE2EDuration="4.252946248s" podCreationTimestamp="2024-06-25 16:24:38 +0000 UTC" firstStartedPulling="2024-06-25 16:24:39.121336321 +0000 UTC m=+95.278566653" lastFinishedPulling="2024-06-25 16:24:41.600556307 +0000 UTC m=+97.757786639" observedRunningTime="2024-06-25 16:24:42.252140019 +0000 UTC m=+98.409370381" watchObservedRunningTime="2024-06-25 16:24:42.252946248 +0000 UTC m=+98.410176580" Jun 25 16:24:42.407000 audit[5283]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:42.407000 audit[5283]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc0e445f40 a2=0 a3=7ffc0e445f2c items=0 ppid=2502 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:42.409000 audit[5283]: NETFILTER_CFG table=nat:131 family=2 entries=44 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:42.409000 audit[5283]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffc0e445f40 a2=0 a3=7ffc0e445f2c items=0 ppid=2502 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:42.564000 audit[5285]: NETFILTER_CFG table=filter:132 family=2 entries=9 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:42.564000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff86a8e0d0 a2=0 a3=7fff86a8e0bc items=0 ppid=2502 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:42.566000 audit[5285]: NETFILTER_CFG table=nat:133 family=2 entries=51 op=nft_register_chain pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:42.566000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7fff86a8e0d0 a2=0 a3=7fff86a8e0bc items=0 ppid=2502 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:43.581000 audit[5287]: NETFILTER_CFG table=filter:134 family=2 entries=8 op=nft_register_rule pid=5287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:43.581000 audit[5287]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffda3248f60 a2=0 a3=7ffda3248f4c items=0 ppid=2502 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:43.581000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:43.583000 audit[5287]: NETFILTER_CFG table=nat:135 family=2 entries=58 op=nft_register_chain pid=5287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:43.583000 audit[5287]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffda3248f60 a2=0 a3=7ffda3248f4c items=0 ppid=2502 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:43.583000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:46.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.90:22-10.0.0.1:60804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.650902 systemd[1]: Started sshd@26-10.0.0.90:22-10.0.0.1:60804.service - OpenSSH per-connection server daemon (10.0.0.1:60804). Jun 25 16:24:46.651808 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 16:24:46.651857 kernel: audit: type=1130 audit(1719332686.649:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.90:22-10.0.0.1:60804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.687000 audit[5317]: USER_ACCT pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.688371 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 60804 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:46.689795 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:46.688000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.694682 systemd-logind[1277]: New session 27 of user core. 
Jun 25 16:24:46.695450 kernel: audit: type=1101 audit(1719332686.687:824): pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.695493 kernel: audit: type=1103 audit(1719332686.688:825): pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.695514 kernel: audit: type=1006 audit(1719332686.688:826): pid=5317 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:24:46.688000 audit[5317]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbda504f0 a2=3 a3=7faaffab2480 items=0 ppid=1 pid=5317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:46.700213 kernel: audit: type=1300 audit(1719332686.688:826): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbda504f0 a2=3 a3=7faaffab2480 items=0 ppid=1 pid=5317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:46.700281 kernel: audit: type=1327 audit(1719332686.688:826): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:46.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:46.705258 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 16:24:46.708000 audit[5317]: USER_START pid=5317 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.710000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.718728 kernel: audit: type=1105 audit(1719332686.708:827): pid=5317 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.718826 kernel: audit: type=1103 audit(1719332686.710:828): pid=5319 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.825113 sshd[5317]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:46.825000 audit[5317]: USER_END pid=5317 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.828222 systemd[1]: sshd@26-10.0.0.90:22-10.0.0.1:60804.service: Deactivated successfully. 
Jun 25 16:24:46.829012 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:24:46.830041 systemd-logind[1277]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:24:46.825000 audit[5317]: CRED_DISP pid=5317 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.830841 systemd-logind[1277]: Removed session 27. Jun 25 16:24:46.832956 kernel: audit: type=1106 audit(1719332686.825:829): pid=5317 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.833026 kernel: audit: type=1104 audit(1719332686.825:830): pid=5317 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:46.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.90:22-10.0.0.1:60804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:51.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.90:22-10.0.0.1:60816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:51.840179 systemd[1]: Started sshd@27-10.0.0.90:22-10.0.0.1:60816.service - OpenSSH per-connection server daemon (10.0.0.1:60816). Jun 25 16:24:51.920358 kubelet[2299]: E0625 16:24:51.920300 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:52.308871 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:52.308970 kernel: audit: type=1130 audit(1719332691.838:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.90:22-10.0.0.1:60816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.568000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.571905 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:52.572718 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 60816 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:24:52.612119 systemd-logind[1277]: New session 28 of user core. 
Jun 25 16:24:52.633864 kernel: audit: type=1101 audit(1719332692.568:833): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.633915 kernel: audit: type=1103 audit(1719332692.569:834): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.633973 kernel: audit: type=1006 audit(1719332692.569:835): pid=5334 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:24:52.634013 kernel: audit: type=1300 audit(1719332692.569:835): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe988c35e0 a2=3 a3=7f2ff7c8d480 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:52.634188 kernel: audit: type=1327 audit(1719332692.569:835): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:52.569000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.569000 audit[5334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe988c35e0 a2=3 a3=7f2ff7c8d480 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:52.569000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:52.638465 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 16:24:52.671000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.681000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.692985 kernel: audit: type=1105 audit(1719332692.671:836): pid=5334 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.693799 kernel: audit: type=1103 audit(1719332692.681:837): pid=5336 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.950527 sshd[5334]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:52.951000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.957609 systemd[1]: sshd@27-10.0.0.90:22-10.0.0.1:60816.service: Deactivated successfully. Jun 25 16:24:52.959955 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 16:24:52.967712 systemd-logind[1277]: Session 28 logged out. Waiting for processes to exit. Jun 25 16:24:52.973381 systemd-logind[1277]: Removed session 28. Jun 25 16:24:53.017196 kernel: audit: type=1106 audit(1719332692.951:838): pid=5334 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.952000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.90:22-10.0.0.1:60816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:53.041100 kernel: audit: type=1104 audit(1719332692.952:839): pid=5334 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'